WorldWideScience

Sample records for advanced computational methods

  1. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. It is designed to extend the existing literature to the most recent developments in the field, which are of interest to readers in both academia and industry. The topics include advanced techniques in MoM, FEM and FDTD, the spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency- and time-domain integral equations, and statistical methods in bio-electromagnetics.

  2. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.
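
The population-based operators surveyed in such books share a common skeleton: propose a perturbed candidate, evaluate it, and keep it only if it improves. As a minimal sketch of that loop (a generic (1+1) evolution strategy on the sphere function, not one of the book's specific algorithms such as SSO or SMS; all parameter values are illustrative):

```python
import random

def evolve(f, dim, iters=2000, sigma=0.5, seed=0):
    """Minimal (1+1) evolution strategy: mutate the current best and
    keep the mutant only if it improves the objective f."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        # Gaussian mutation with a slowly decaying step size
        y = [xi + rng.gauss(0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:          # greedy (elitist) selection
            x, fx = y, fy
        sigma *= 0.999
    return x, fx

# Sphere function: global minimum 0 at the origin
sphere = lambda v: sum(vi * vi for vi in v)
best, best_f = evolve(sphere, dim=3)
```

The algorithms in the book differ mainly in how the perturbation and selection steps above are generalized to a whole population with problem-specific operators.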

  3. Transonic wing analysis using advanced computational methods

    Science.gov (United States)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    This paper discusses the application of three-dimensional computational transonic flow methods to several different types of transport wing designs. The purpose of these applications is to evaluate the basic accuracy and limitations associated with such numerical methods. The use of such computational methods for practical engineering problems can only be justified after favorable evaluations are completed. The paper summarizes a study of both the small-disturbance and the full potential technique for computing three-dimensional transonic flows. Computed three-dimensional results are compared to both experimental measurements and theoretical results. Comparisons are made not only of pressure distributions but also of lift and drag forces. Transonic drag rise characteristics are compared. Three-dimensional pressure distributions and aerodynamic forces, computed from the full potential solution, compare reasonably well with experimental results for a wide range of configurations and flow conditions.

  4. Advanced Methods and Applications in Computational Intelligence

    CERN Document Server

    Nikodem, Jan; Jacak, Witold; Chaczko, Zenon; ACASE 2012

    2014-01-01

This book offers an excellent presentation of the foundations of intelligent engineering and informatics for researchers in the field, together with many examples of industrial application. It contains extended versions of selected papers presented at the inaugural ACASE 2012 Conference dedicated to the Applications of Systems Engineering. This conference was held from the 6th to the 8th of February 2012 at the University of Technology, Sydney, Australia, organized by the University of Technology, Sydney (Australia), Wroclaw University of Technology (Poland) and the University of Applied Sciences in Hagenberg (Austria). The book is organized into three main parts. Part I contains papers devoted to heuristic approaches that are applicable in situations where the problem cannot be solved by exact methods, due to various characteristics or dimensionality problems. Part II covers essential issues of network management, presenting intelligent models of the next generation of networks and distributed systems ...

  5. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron Correlation in Molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  6. Development of advanced nodal diffusion methods for modern computer architectures

    International Nuclear Information System (INIS)

A family of highly efficient multidimensional multigroup advanced neutron-diffusion nodal methods, ILLICO, was implemented on sequential, vector, and vector-concurrent computers. Three-dimensional realistic benchmark problems can be solved in vectorized mode in less than 0.73 s (33.86 Mflops) on a Cray X-MP/48. Vector-concurrent implementations yield speedups as high as 9.19 on an Alliant FX/8. These results show that the ILLICO method preserves essentially all of its speed advantage over finite-difference methods. A self-consistent higher-order nodal diffusion method was developed and implemented. Nodal methods for global nuclear reactor multigroup diffusion calculations which account explicitly for heterogeneities in the assembly nuclear properties were developed and evaluated. A systematic analysis of the zero-order variable cross section nodal method was conducted. Analyzing the KWU PWR depletion benchmark problem, it is shown that when burnup heterogeneities arise, ordinary nodal methods, which do not explicitly treat the heterogeneities, suffer a significant systematic error that accumulates. A nodal method that treats explicitly the space dependence of diffusion coefficients was developed and implemented. A consistent burnup-correction method for nodal microscopic depletion analysis was developed.
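
As a rough illustration of the kind of global diffusion eigenvalue calculation that nodal methods like ILLICO accelerate, here is a deliberately simple one-group, 1-D finite-difference solver with power iteration (this is the finite-difference baseline the abstract compares against, not the nodal scheme itself; the cross-section values are invented for the example):

```python
import numpy as np

# Hypothetical one-group constants for a bare homogeneous slab
D, sig_a, nu_sig_f, L = 1.0, 0.1, 0.11, 50.0
N = 200                       # number of mesh intervals
h = L / N

# Finite-difference operator for -D phi'' + sig_a phi with
# zero-flux boundaries (interior nodes only)
main = (2.0 * D / h**2 + sig_a) * np.ones(N - 1)
off = (-D / h**2) * np.ones(N - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Power iteration on the fission source for k-effective
phi = np.ones(N - 1)
k = 1.0
for _ in range(500):
    phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
    k *= phi_new.sum() / phi.sum()      # eigenvalue update
    phi = phi_new / np.linalg.norm(phi_new)

# Analytic k-effective for the bare slab: buckling B = pi / L
B = np.pi / L
k_exact = nu_sig_f / (sig_a + D * B**2)
```

The nodal methods of the abstract obtain the same eigenvalue with far coarser meshes (one node per assembly), which is where their speed advantage over this fine-mesh finite-difference approach comes from.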

  7. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
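
The gap between the safety-factor approach and a quantitative reliability estimate can be seen in the simplest possible case: a normally distributed capacity R against a normally distributed demand S, for which the failure probability has a closed form via the reliability index. A hedged sketch (the means and standard deviations below are invented, and real analyses involve many correlated, non-normal variables):

```python
import math
import random

# Hypothetical limit state g = R - S: failure when g < 0
mu_R, sd_R = 500.0, 50.0    # resistance
mu_S, sd_S = 300.0, 60.0    # load effect

# Crude Monte Carlo estimate of the failure probability
rng = random.Random(42)
n, fails = 200_000, 0
for _ in range(n):
    g = rng.gauss(mu_R, sd_R) - rng.gauss(mu_S, sd_S)
    fails += g < 0
pf_mc = fails / n

# Exact result for two independent normals:
beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)   # reliability index
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2)) # Phi(-beta)
```

Note that a deterministic check (mean capacity comfortably above mean demand, here by a factor of 1.67) says nothing about the residual failure probability, which the reliability index quantifies directly.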

  8. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  9. Advanced Computational Methods for Thermal Radiative Heat Transfer.

    Energy Technology Data Exchange (ETDEWEB)

Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.

    2016-10-01

Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced-order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
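
A common building block of ROM (the report does not specify the authors' exact formulation, so this is only the generic ingredient) is proper orthogonal decomposition: take an SVD of a matrix of full-order solution snapshots and keep only the dominant modes as a reduced basis. A sketch on synthetic 1-D "temperature field" snapshots:

```python
import numpy as np

# Snapshot matrix: each column is a full-order field at one time
# instant (a synthetic two-mode cooling profile, for illustration)
x = np.linspace(0.0, 1.0, 400)
times = np.linspace(0.1, 2.0, 60)
snapshots = np.column_stack(
    [np.exp(-t) * np.sin(np.pi * x)
     + 0.1 * np.exp(-4 * t) * np.sin(3 * np.pi * x)
     for t in times])

# Proper orthogonal decomposition: SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                      # keep the two dominant modes
basis = U[:, :r]           # reduced basis: 400 unknowns -> 2

# Project a snapshot onto the basis and reconstruct it
coeffs = basis.T @ snapshots[:, 0]
recon = basis @ coeffs
err = (np.linalg.norm(recon - snapshots[:, 0])
       / np.linalg.norm(snapshots[:, 0]))
```

The computational saving in a real PMR code comes from evolving only the handful of basis coefficients in time instead of the full-order state.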

  11. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  12. Advanced computational methods for the assessment of reactor core behaviour during reactivity initiated accidents. Final report

    International Nuclear Information System (INIS)

The document at hand serves as the final report for the reactor safety research project RS1183, "Advanced Computational Methods for the Assessment of Reactor Core Behavior During Reactivity-Initiated Accidents". The work performed in the framework of this project was dedicated to the development, validation and application of advanced computational methods for the simulation of transients and accidents of nuclear installations. These simulation tools describe in particular the behavior of the reactor core (with respect to neutronics, thermal-hydraulics and thermal mechanics) at a very high level of detail. The overall goal of this project was the deployment of a modern nuclear computational chain which provides, besides advanced 3D tools for coupled neutronics/thermal-hydraulics full core calculations, also appropriate tools for the generation of multi-group cross sections and Monte Carlo models for the verification of the individual calculational steps. This computational chain shall primarily be deployed for light water reactors (LWR), but should beyond that also be applicable to innovative reactor concepts. Thus, validation on computational benchmarks and critical experiments was of paramount importance. Finally, appropriate methods for uncertainty and sensitivity analysis were to be integrated into the computational framework, in order to assess and quantify the uncertainties due to insufficient knowledge of data, as well as due to methodological aspects.

  13. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    Energy Technology Data Exchange (ETDEWEB)

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  14. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  15. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss new advances and challenges in these fields, using several methods and techniques to address more efficiently a range of timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  16. Computational methods to extract meaning from text and advance theories of human cognition.

    Science.gov (United States)

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.

  17. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  18. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

Features contributions that are focused on significant aspects of current numerical methods and computational mathematics. This book contains chapters that present advanced methods and various variations on known techniques that can solve difficult scientific problems efficiently.

  19. Computing methods

    CERN Document Server

    Berezin, I S

    1965-01-01

Computing Methods, Volume 2 is a five-chapter text that presents numerical methods for solving sets of mathematical equations. This volume covers the solution of sets of linear algebraic equations, high-degree and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations. The book is intended as a text-book for students in mechanical-mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the ...

  20. Advances in computers

    CERN Document Server

    Memon, Atif

    2012-01-01

Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with m...

  1. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    Science.gov (United States)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. 
Examples of solving simple PDEs in parallel using ...
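
Since the book assumes familiarity with the finite element method in one space dimension, the basic assemble-and-solve pattern it builds on can be sketched as follows (written in Python rather than Diffpack's C++, purely as an illustration; the right-hand side is chosen so the exact solution is known):

```python
import numpy as np

# -u'' = f on (0, 1), u(0) = u(1) = 0, with f chosen so that
# the exact solution is u(x) = sin(pi x)
n = 100                        # number of linear elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Stiffness matrix for piecewise-linear elements on a uniform mesh
K = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-1.0 * np.ones(n - 2), 1)
     + np.diag(-1.0 * np.ones(n - 2), -1)) / h

# Nodal-quadrature (lumped) load vector at the interior nodes
b = h * f(nodes[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, b)   # homogeneous Dirichlet BCs built in

err = np.max(np.abs(u - np.sin(np.pi * nodes)))
```

Diffpack wraps exactly these steps (mesh, element assembly, linear solve) in C++ classes, and the book's parallel chapters distribute the assembly and solve across MPI processes.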

  2. Computational methods in the prediction of advanced subsonic and supersonic propeller induced noise: ASSPIN users' manual

    Science.gov (United States)

    Dunn, M. H.; Tarkenton, G. M.

    1992-01-01

This document describes the computational aspects of propeller noise prediction in the time domain and the use of the high-speed propeller noise prediction program ASSPIN (Advanced Subsonic and Supersonic Propeller Induced Noise). These formulations are valid in both the near and far fields. Two formulations are utilized by ASSPIN: (1) one is used for subsonic portions of the propeller blade; and (2) the second is used for transonic and supersonic regions on the blade. Switching between the two formulations is done automatically. ASSPIN incorporates advanced blade geometry and surface pressure modelling, adaptive observer time grid strategies, and contains enhanced numerical algorithms that result in reduced computational time. In addition, the ability to treat the nonaxial inflow case has been included.

  3. Advances in Computers

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

This is volume 79 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that co...

  4. Recent advances in computational optimization

    CERN Document Server

    2013-01-01

Optimization is part of our everyday life. We try to organize our work in a better way, and optimization occurs in minimizing time and cost or maximizing profit, quality and efficiency. Many real-world problems arising in engineering, economics, medicine and other domains can also be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization and presents recent advances in the field. The volume includes important real-world problems like parameter settings for controlling processes in a bioreactor, robot skin wiring, strip packing, project scheduling, tuning of PID controllers and so on. Some of them can be solved by applying traditional numerical methods, but others need a huge amount of computational resources. For those, it is shown that it is appropriate to develop algorithms based on metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming etc...

  5. Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems

    Science.gov (United States)

    Martin, W. R.

    1993-01-01

This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code, into which simplified P2 has also been implemented. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions better than conventional Monte Carlo with splitting and Russian roulette for both 1-D and 2-D x-y geometries. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
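
The exponential-transform idea behind item (2) can be illustrated in its simplest global form (the report's method is a local variant, so this is only the textbook version): sample flight distances from a stretched exponential and correct with a weight, which sharply cuts the variance of deep-penetration transmission estimates.

```python
import math
import random

# Transmission of a pencil beam through a purely absorbing slab:
# the exact answer is exp(-sigma * L). Sampling with a reduced
# cross section sigma_star < sigma stretches flight distances
# toward the far side; the particle weight corrects the bias.
sigma, L = 1.0, 5.0
exact = math.exp(-sigma * L)

def estimate(sigma_star, n, seed=1):
    rng = random.Random(seed)
    total = total_sq = 0.0
    # Weight of a transmitted particle: ratio of the true to the
    # biased probability of flying farther than L
    w = math.exp(-(sigma - sigma_star) * L)
    for _ in range(n):
        s = rng.expovariate(sigma_star)   # biased flight distance
        score = w if s > L else 0.0
        total += score
        total_sq += score * score
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, var

analog, var_analog = estimate(sigma, 200_000)        # unbiased sampling
biased, var_biased = estimate(0.5 * sigma, 200_000)  # exp. transform
```

Both estimators are unbiased, but the transformed one scores a moderate weight frequently instead of a weight of 1 rarely, which is where the variance reduction comes from.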

  6. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    Science.gov (United States)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: they employ error-estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  7. Advanced computations in plasma physics

    International Nuclear Information System (INIS)

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  8. Advanced differential quadrature methods

    CERN Document Server

    Zong, Zhi

    2009-01-01

Modern Tools to Perform Numerical Differentiation. The original direct differential quadrature (DQ) method has been known to fail for problems with strong nonlinearity and material discontinuity as well as for problems involving singularity, irregularity, and multiple scales. But now researchers in applied mathematics, computational mechanics, and engineering have developed a range of innovative DQ-based methods to overcome these shortcomings. Advanced Differential Quadrature Methods explores new DQ methods and uses these methods to solve problems beyond the capabilities of the direct DQ method. After a basic introduction to the direct DQ method, the book presents a number of DQ methods, including complex DQ, triangular DQ, multi-scale DQ, variable order DQ, multi-domain DQ, and localized DQ. It also provides a mathematical compendium that summarizes Gauss elimination, the Runge-Kutta method, complex analysis, and more. The final chapter contains three codes written in the FORTRAN language, enabling readers to q...
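
The direct DQ method that the book builds on approximates the derivative at each grid point as a weighted sum of the function values at all grid points, with weights obtainable from Lagrange interpolation. A minimal sketch of the first-derivative weighting matrix (standard formulas, independent of the book's FORTRAN codes):

```python
import numpy as np

def dq_matrix(x):
    """First-derivative DQ weighting matrix on grid x:
    (D @ f)[i] ~ f'(x[i]); exact for polynomials of degree
    < len(x) (Lagrange-interpolation weights)."""
    n = len(x)
    # M1[i] = product over k != i of (x_i - x_k)
    M1 = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                   for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = M1[i] / ((x[i] - x[j]) * M1[j])
        D[i, i] = -np.sum(D[i, :])   # rows must annihilate constants
    return D

# Exact differentiation of a cubic on 5 non-uniform points
x = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
D = dq_matrix(x)
df = D @ x**3                         # should equal 3 x^2
```

The advanced variants in the book (multi-domain, localized, variable-order DQ) change how this dense global matrix is split up or localized, which is what rescues the method on discontinuous and multi-scale problems.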

  9. Direct methods for limit and shakedown analysis of structures advanced computational algorithms and material modelling

    CERN Document Server

    Pisano, Aurora; Weichert, Dieter

    2015-01-01

    Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that “Direct Methods” and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications.   Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in...

  10. Developing advanced X-ray scattering methods combined with crystallography and computation.

    Science.gov (United States)

    Perry, J Jefferson P; Tainer, John A

    2013-03-01

    The extensive use of small angle X-ray scattering (SAXS) over the last few years is rapidly providing new insights into protein interactions, complex formation and conformational states in solution. This SAXS methodology allows for detailed biophysical quantification of samples of interest. Initial analyses provide a judgment of sample quality, revealing the potential presence of aggregation, the overall extent of folding or disorder, the radius of gyration, maximum particle dimensions and oligomerization state. Structural characterizations include ab initio approaches from SAXS data alone; when combined with previously determined crystal/NMR structures, atomistic modeling can further enhance structural solutions and assess their validity. This combination can provide definitions of architectures and spatial organizations of protein domains within a complex, including those not determined by crystallography or NMR, as well as defining key conformational states of a protein interaction. SAXS is not generally constrained by macromolecule size, and the rapid collection of data in a 96-well plate format provides methods to screen sample conditions. This includes screening for co-factors, substrates, differing protein or nucleotide partners or small molecule inhibitors, to more fully characterize the variations within assembly states and key conformational changes. Such analyses may be useful for screening constructs and conditions to determine those most likely to promote crystal growth of a complex under study. Moreover, these high-throughput structural determinations can be leveraged to define how polymorphisms affect assembly formations and activities, in addition to potentially providing architectural characterizations of complexes and interactions for systems biology-based research, and distinctions in assemblies and interactions in comparative genomics. Thus, SAXS combined with crystallography/NMR and computation provides a unique set of tools that should be considered
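    One of the initial analyses mentioned above, estimating the radius of gyration, is commonly done with a Guinier fit. The following sketch (our illustrative example, not the authors' software) recovers Rg from the low-q slope of ln I(q) versus q^2, using the standard relation ln I(q) ≈ ln I(0) - Rg^2 q^2 / 3, valid roughly for q*Rg < 1.3:

```python
# Guinier analysis sketch: a linear least-squares fit of ln I against q^2
# yields slope = -Rg^2/3 and intercept = ln I(0).
import math

def guinier_rg(q, intensity):
    """Fit ln I vs q^2; returns (Rg, I0)."""
    xs = [qi * qi for qi in q]
    ys = [math.log(ii) for ii in intensity]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.sqrt(-3.0 * slope), math.exp(intercept)

# Synthetic Guinier-region data for a particle with Rg = 20 (inverse q units).
rg_true, i0_true = 20.0, 1000.0
q = [0.005 * (k + 1) for k in range(10)]          # keeps q * Rg <= 1.0
i_obs = [i0_true * math.exp(-(rg_true * qi) ** 2 / 3.0) for qi in q]
rg_fit, i0_fit = guinier_rg(q, i_obs)
```

    On noise-free synthetic data the fit recovers Rg and I(0) exactly; on real data, upward curvature of the Guinier plot at low q is the aggregation warning sign the abstract refers to.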

  11. Advances in physiological computing

    CERN Document Server

    Fairclough, Stephen H

    2014-01-01

    This edited collection will provide an overview of the field of physiological computing, i.e. the use of physiological signals as input for computer control. It will cover a breadth of current research, from brain-computer interfaces to telemedicine.

  12. NATO Advanced Research Workshop on Computational Methods for Polymers and Liquid Crystalline Polymers

    CERN Document Server

    Pasini, Paolo; Žumer, Slobodan; Computer Simulations of Liquid Crystals and Polymers

    2005-01-01

    Liquid crystals, polymers and polymer liquid crystals are soft condensed matter systems of major technological and scientific interest. An understanding of the macroscopic properties of these complex systems and of their many and interesting peculiarities at the molecular level can nowadays only be attained using computer simulations and statistical mechanical theories. In both the liquid crystal and polymer fields a considerable amount of simulation work has been done in the last few years with various classes of models at different spatial resolutions, ranging from atomistic to molecular and coarse-grained lattice models. Each of the two fields has developed its own set of tools and specialized procedures and the book aims to provide a state-of-the-art review of the computer simulation studies of polymers and liquid crystals. This is of great importance in view of a potential cross-fertilization between these connected areas, which is particularly apparent for a number of experimental systems like, e.g. poly...

  13. Advanced methods for the quantification of trabecular bone structure and density in micro computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jing

    2011-07-01

    Bone remodeling is a lifelong process composed of bone formation and resorption. Imbalance between bone formation and resorption is a cause of metabolic bone diseases. Thus, the understanding of factors that affect the remodeling balance is of great importance. Conventionally, bone structure is measured using histomorphometry of thin stained sections, which is destructive and non-reproducible. In contrast, volumetric micro-computed tomography (µCT) imaging is a powerful tool for quantifying the bone quality of small samples non-destructively. The aim of this thesis is to develop an analysis tool to quantify the trabecular bone of mouse tibiae with high efficiency, accuracy and reproducibility. Materials and methods: The trabecular volume of interest (VOI) definition in the proximal metaphysis of mouse tibiae includes three segmentation steps: the periosteal surface, the primary spongiosa and the proximal metaphysis. All these segmentation algorithms are hybrid volume-growing-based approaches including automatic threshold estimation, volume growing with different criteria and combined morphological operations. To preserve the connectivity of the trabecular network, volume growing with local adaptive thresholding (LAT) is used for the segmentation of the trabeculae. In order to accelerate this process, the algorithm is only applied to voxels with gray values in an interval defined by two global thresholds. These are automatically determined and depend on the voxel-to-object-size ratio of the dataset. Standard bone structural parameters were implemented [29, 30, 62]. For the assessment of tissue mineral density (TMD), a calibration phantom made of epoxy resin-based material with two hydroxyapatite (HA) inserts was developed. Experiments were performed with the µCT FORBILD scanner of the IMP to validate the homogeneity of the phantom inserts, the water equivalence of the epoxy resin-based plastic, the effect of beam hardening and the stability of the µCT calibration
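    The segmentation strategy described above, volume growing restricted to a global gray-value interval and gated by a local adaptive threshold, can be sketched in simplified 2D form (a hypothetical toy example; the thresholds, neighborhood radius and acceptance fraction below are our assumptions, not the thesis parameters):

```python
# Seeded region growing with a local adaptive threshold: a voxel joins the
# region if its gray value lies in the global interval AND exceeds a fraction
# of the mean over its local neighborhood. The local test helps preserve the
# connectivity of thin structures that a single global threshold would cut.
from collections import deque

def grow_region(img, seed, t_lo, t_hi, frac=0.9, radius=1):
    """2D example; img is a list of rows of gray values."""
    h, w = len(img), len(img[0])

    def local_mean(y, x):
        vals = [img[j][i]
                for j in range(max(0, y - radius), min(h, y + radius + 1))
                for i in range(max(0, x - radius), min(w, x + radius + 1))]
        return sum(vals) / len(vals)

    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                v = img[ny][nx]
                # only voxels inside the global interval are even examined
                if t_lo <= v <= t_hi and v >= frac * local_mean(ny, nx):
                    region.add((ny, nx))
                    queue.append((ny, nx))
    return region

image = [[10, 10, 10, 10],
         [10, 90, 80, 10],
         [10, 85, 95, 10],
         [10, 10, 10, 10]]
bone = grow_region(image, (1, 1), t_lo=50, t_hi=255)
```

    Restricting the expensive local test to voxels inside the global interval is what provides the acceleration mentioned in the abstract.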

  14. Recent Advances in Evolutionary Computation

    Institute of Scientific and Technical Information of China (English)

    Xin Yao; Yong Xu

    2006-01-01

    Evolutionary computation has experienced tremendous growth in the last decade in both theoretical analyses and industrial applications. Its scope has evolved beyond its original meaning of "biological evolution" toward a wide variety of nature-inspired computational algorithms and techniques, including evolutionary, neural, ecological, social and economic computation, etc., in a unified framework. Many research topics in evolutionary computation nowadays are not necessarily "evolutionary". This paper provides an overview of some recent advances in evolutionary computation that have been made in CERCIA at the University of Birmingham, UK. It covers a wide range of topics in optimization, learning and design using evolutionary approaches and techniques, and theoretical results on the computational time complexity of evolutionary algorithms. Some issues related to the future development of evolutionary computation are also discussed.
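    The computational time complexity results mentioned above are typically proved for simple algorithm/problem pairs. A minimal runnable sketch (our example, not CERCIA code) is the (1+1) evolutionary algorithm on the OneMax problem, whose expected optimization time is known to be Θ(n log n):

```python
# (1+1) EA on OneMax: keep a single bit string, flip each bit independently
# with probability 1/n, and accept the child if it is not worse.
import random

def one_plus_one_ea(n, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(parent) < n:          # OneMax fitness is the number of ones
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        evals += 1
        if sum(child) >= sum(parent):
            parent = child
    return evals, parent

evals, best = one_plus_one_ea(32)
```

    Averaging `evals` over many seeds reproduces the e*n*ln(n) growth predicted by the theory, which is the kind of rigorous runtime statement the CERCIA work builds on.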

  15. Advanced computational biology methods identify molecular switches for malignancy in an EGF mouse model of liver cancer.

    Directory of Open Access Journals (Sweden)

    Philip Stegmaier

    The molecular causes by which the epidermal growth factor receptor tyrosine kinase induces malignant transformation are largely unknown. To better understand EGF's transforming capacity, whole genome scans were applied to a transgenic mouse model of liver cancer and subjected to advanced methods of computational analysis to construct de novo gene regulatory networks based on a combination of sequence analysis and entrained graph-topological algorithms. Here we identified transcription factors, processes, key nodes and molecules to connect as yet unknown interacting partners at the level of protein-DNA interaction. Many of these could be confirmed by electromobility band shift assay at recognition sites of gene-specific promoters and by western blotting of nuclear proteins. A novel cellular regulatory circuitry could therefore be proposed that connects cell cycle regulated genes with components of the EGF signaling pathway. Promoter analysis of differentially expressed genes suggested that the majority of regulated transcription factors display specificity to either the pre-tumor or the tumor state. A subsequent search for signal transduction key nodes upstream of the identified transcription factors and their targets suggested that the insulin-like growth factor pathway renders the tumor cells independent of EGF receptor activity. Notably, expression of IGF2, in addition to many components of this pathway, was highly upregulated in tumors. Together, we propose a switch in autocrine signaling to foster tumor growth that was initially triggered by EGF, and demonstrate the knowledge gained from promoter analysis combined with upstream key node identification.

  16. The finite volume method in computational fluid dynamics an advanced introduction with OpenFOAM and Matlab

    CERN Document Server

    Moukalled, F; Darwish, M

    2016-01-01

    This textbook explores both the theoretical foundation of the Finite Volume Method (FVM) and its applications in Computational Fluid Dynamics (CFD). Readers will discover a thorough explanation of the FVM numerics and algorithms used for the simulation of incompressible and compressible fluid flows, along with a detailed examination of the components needed for the development of a collocated unstructured pressure-based CFD solver. Two particular CFD codes are explored. The first is uFVM, a three-dimensional unstructured pressure-based finite volume academic CFD code, implemented within Matlab. The second is OpenFOAM®, an open source framework used in the development of a range of CFD programs for the simulation of industrial scale flow problems. With over 220 figures, numerous examples and more than one hundred exercises on FVM numerics, programming, and applications, this textbook is suitable for use in an introductory course on the FVM, in an advanced course on numerics, and as a reference for CFD programm...
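    As a taste of the FVM numerics such a text covers, here is a minimal sketch (our own, unrelated to uFVM or OpenFOAM) of 1D steady diffusion: each cell balances the diffusive fluxes through its two faces, and the resulting tridiagonal system is solved with the Thomas algorithm:

```python
# 1D finite volume discretization of d/dx(k dT/dx) = 0 on [0, 1] with unit
# conductivity and fixed end temperatures, solved by the Thomas algorithm.

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub[0] and sup[-1] are unused."""
    n = len(rhs)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = d[:]
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

def fvm_1d_diffusion(n, t_left, t_right):
    """n cells of width dx = 1/n; returns cell-centre temperatures."""
    dx = 1.0 / n
    # Interior faces have conductance 1/dx; boundary faces sit dx/2 from the
    # first/last cell centre, hence the stronger 2/dx boundary coupling.
    sub = [0.0] + [-1.0 / dx] * (n - 1)
    sup = [-1.0 / dx] * (n - 1) + [0.0]
    diag = [3.0 / dx] + [2.0 / dx] * (n - 2) + [3.0 / dx]
    rhs = [2.0 / dx * t_left] + [0.0] * (n - 2) + [2.0 / dx * t_right]
    return thomas(sub, diag, sup, rhs)

temps = fvm_1d_diffusion(4, t_left=0.0, t_right=1.0)
# exact linear solution at the cell centres: 0.125, 0.375, 0.625, 0.875
```

    For constant conductivity the FVM reproduces the linear exact solution to machine precision; the book's material on collocated unstructured meshes generalizes exactly this face-flux bookkeeping to arbitrary cell shapes.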

  17. Advanced computational methods for the assessment of reactor core behaviour during reactivity initiated accidents. Final report; Fortschrittliche Rechenmethoden zum Kernverhalten bei Reaktivitaetsstoerfaellen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Pautz, A.; Perin, Y.; Pasichnyk, I.; Velkov, K.; Zwermann, W.; Seubert, A.; Klein, M.; Gallner, L.; Krzycacz-Hausmann, B.

    2012-05-15

    The document at hand serves as the final report for the reactor safety research project RS1183 "Advanced Computational Methods for the Assessment of Reactor Core Behavior During Reactivity-Initiated Accidents". The work performed in the framework of this project was dedicated to the development, validation and application of advanced computational methods for the simulation of transients and accidents of nuclear installations. These simulation tools describe in particular the behavior of the reactor core (with respect to neutronics, thermal-hydraulics and thermal mechanics) at a very high level of detail. The overall goal of this project was the deployment of a modern nuclear computational chain which provides, besides advanced 3D tools for coupled neutronics/thermal-hydraulics full core calculations, also appropriate tools for the generation of multi-group cross sections and Monte Carlo models for the verification of the individual calculational steps. This computational chain shall primarily be deployed for light water reactors (LWR), but should beyond that also be applicable to innovative reactor concepts. Thus, validation on computational benchmarks and critical experiments was of paramount importance. Finally, appropriate methods for uncertainty and sensitivity analysis were to be integrated into the computational framework, in order to assess and quantify the uncertainties due to insufficient knowledge of data, as well as due to methodological aspects.

  18. Advances and trends in computational structures technology

    Science.gov (United States)

    Noor, A. K.; Venneri, S. L.

    1990-01-01

    The major goals of computational structures technology (CST) are outlined, and recent advances in CST are examined. These include computational material modeling, stochastic-based modeling, computational methods for articulated structural dynamics, strategies and numerical algorithms for new computing systems, and multidisciplinary analysis and optimization. The role of CST in the future development of structures technology and the multidisciplinary design of future flight vehicles is addressed, and the future directions of CST research in the prediction of failures of structural components, the solution of large-scale structural problems, and quality assessment and control of numerical simulations are discussed.

  19. International Conference on Advanced Computing

    CERN Document Server

    Patnaik, Srikanta

    2014-01-01

    This book is composed of the Proceedings of the International Conference on Advanced Computing, Networking, and Informatics (ICACNI 2013), held at Central Institute of Technology, Raipur, Chhattisgarh, India during June 14–16, 2013. The book records current research articles in the domain of computing, networking, and informatics. The book presents original research articles, case-studies, as well as review articles in the said field of study with emphasis on their implementation and practical application. Researchers, academicians, practitioners, and industry policy makers around the globe have contributed towards formation of this book with their valuable research submissions.

  20. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  1. Analogue computing methods

    CERN Document Server

    Welbourne, D

    1965-01-01

    Analogue Computing Methods presents the field of analogue computation and simulation in a compact and convenient form, providing an outline of models and analogues that have been produced to solve physical problems for the engineer and how to use and program the electronic analogue computer. This book consists of six chapters. The first chapter provides an introduction to analogue computation and discusses certain mathematical techniques. The electronic equipment of an analogue computer is covered in Chapter 2, while its use to solve simple problems, including the method of scaling is elaborat

  2. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface

  3. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by making use of intelligent systems. Intelligent systems are to robots what cells are to a body, and the two technologies have progressed in step. Leveraging robotics and intelligent systems, applications range boundlessly from daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, and logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners methods to advance intelligent systems and to apply them to advanced robotics technology. The book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  4. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.
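    As a small illustration of the iterative methods for eigenvalue problems treated in such texts (our sketch, not an excerpt from the book), power iteration approximates the dominant eigenpair by repeated matrix-vector products and normalization; note that the infinity-norm estimate below returns the magnitude of the dominant eigenvalue:

```python
# Power iteration: repeated multiplication amplifies the component of v along
# the dominant eigenvector; normalizing each step keeps the iterate bounded.

def mat_vec(a, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in a]

def power_iteration(a, iters=100):
    v = [1.0] + [0.0] * (len(a) - 1)   # any start with a nonzero dominant component
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(a, v)
        lam = max(abs(x) for x in w)   # converges to |dominant eigenvalue|
        v = [x / lam for x in w]
    return lam, v

a = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1, dominant eigenvector [1, 1]
lam, vec = power_iteration(a)
```

    Convergence is linear with ratio |λ2/λ1| (here 1/3 per step), which motivates the shifted and inverse variants analyzed in texts like this one.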

  5. Computational Methods Development at Ames

    Science.gov (United States)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide computational analysis/design capabilities of appropriate fidelity. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents to the above research and speculates on its future course.

  6. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    Science.gov (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to conduct a study that assesses the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis that identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models.
The research also includes a Monte Carlo simulation optimization
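    The kind of Monte Carlo process-performance model referred to above can be sketched as follows (a hypothetical example; the task distributions and deadline are our assumptions, not data from the dissertation): task durations are drawn from triangular (min / most likely / max) distributions and the simulation estimates the probability of meeting a deadline:

```python
# Monte Carlo process-performance sketch: sample each task duration from a
# triangular distribution and count the fraction of trials meeting a deadline.
import random

def simulate_completion(tasks, deadline, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        if total <= deadline:
            hits += 1
    return hits / trials

# (low, most likely, high) durations in days for three sequential tasks.
tasks = [(2, 3, 6), (4, 5, 9), (1, 2, 4)]
p_on_time = simulate_completion(tasks, deadline=14)
```

    This is the basic move behind quantitative process management: replacing a single-point schedule estimate with a distribution of outcomes whose tail probabilities can be tracked against a performance objective.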

  7. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    Science.gov (United States)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between

  8. Advanced Computational Methods for Knowledge Engineering - Proceedings of the 4th International Conference on Computer Science, Applied Mathematics and Applications, ICCSAMA 2016, 2-3 May, 2016, Vienna, Austria

    OpenAIRE

    2016-01-01

    These proceedings consist of 20 papers selected and invited from the submissions to the 4th International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2016) held on 2-3 May, 2016 in Laxenburg, Austria. The conference is organized into 5 sessions: Advanced Optimization Methods and Their Applications, Models for ICT Applications, Topics on Discrete Mathematics, Data Analytic Methods and Applications, and Feature Extraction, respectively. All chap...

  9. Advanced Computer Algebra for Determinants

    CERN Document Server

    Koutschan, Christoph

    2011-01-01

    We prove three conjectures concerning the evaluation of determinants, which are related to the counting of plane partitions and rhombus tilings. One of them has been posed by George Andrews in 1980, the other two are by Guoce Xin and Christian Krattenthaler. Our proofs employ computer algebra methods, namely the holonomic ansatz proposed by Doron Zeilberger and variations thereof. These variations make Zeilberger's original approach even more powerful and allow for addressing a wider variety of determinants. Finally we present, as a challenge problem, a conjecture about a closed form evaluation of Andrews's determinant.

  10. Computational Design of Advanced Nuclear Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Savrasov, Sergey [Univ. of California, Davis, CA (United States); Kotliar, Gabriel [Rutgers Univ., Piscataway, NJ (United States); Haule, Kristjan [Rutgers Univ., Piscataway, NJ (United States)

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides and carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding the complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer-based simulations and avoid costly experiments.

  11. Advances in Computer Science and Engineering

    CERN Document Server

    Second International Conference on Advances in Computer Science and Engineering (CES 2012)

    2012-01-01

    This book includes the proceedings of the second International Conference on Advances in Computer Science and Engineering (CES 2012), which was held during January 13-14, 2012 in Sanya, China. The papers in these proceedings of CES 2012 focus on the researchers’ advanced works in their fields of Computer Science and Engineering mainly organized in four topics, (1) Software Engineering, (2) Intelligent Computing, (3) Computer Networks, and (4) Artificial Intelligence Software.

  12. Advances in Computer Science and its Applications

    CERN Document Server

    Yen, Neil; Park, James; CSA 2013

    2014-01-01

    The theme of CSA is focused on the various aspects of computer science and its applications, and the conference provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of computer science and its applications. This book therefore includes various theories and practical applications in computer science and its applications.

  13. International Conference on Advanced Computing for Innovation

    CERN Document Server

    Angelova, Galia; Agre, Gennady

    2016-01-01

    This volume is a selected collection of papers presented and discussed at the International Conference "Advanced Computing for Innovation (AComIn 2015)". The conference was held on 10-11 November 2015 in Sofia, Bulgaria and was aimed at providing a forum for international scientific exchange between Central/Eastern Europe and the rest of the world on several fundamental topics of computational intelligence. The papers report innovative approaches and solutions in hot topics of computational intelligence – advanced computing, language and semantic technologies, signal and image processing, as well as optimization and intelligent control.

  14. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  15. Advanced laptop and small personal computer technology

    Science.gov (United States)

    Johnson, Roger L.

    1991-01-01

    Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computers and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.

  16. Advanced Biomedical Computing Center (ABCC) | DSITP

    Science.gov (United States)

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  17. Advance Trends in Soft Computing

    CERN Document Server

    Kreinovich, Vladik; Kacprzyk, Janusz; WCSC 2013

    2014-01-01

    This book is the proceedings of the 3rd World Conference on Soft Computing (WCSC), which was held in San Antonio, TX, USA, on December 16-18, 2013. It presents state-of-the-art theory and applications of soft computing together with an in-depth discussion of current and future challenges in the field, providing readers with a 360-degree view of soft computing. Topics range from fuzzy sets, to fuzzy logic, fuzzy mathematics, neuro-fuzzy systems, fuzzy control, decision making in fuzzy environments, image processing and many more. The book is dedicated to Lotfi A. Zadeh, a renowned specialist in signal analysis and control systems research who proposed the idea of fuzzy sets, in which an element may have a partial membership, in the early 1960s, followed by the idea of fuzzy logic, in which a statement can be true only to a certain degree, with degrees described by numbers in the interval [0,1]. The performance of fuzzy systems can often be improved with the help of optimization techniques, e.g. evolutionary co...
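    Zadeh's two ideas as summarized above, partial membership and truth to a degree, can be made concrete in a few lines (our illustration, with a hypothetical triangular membership function and the classical min/max operators):

```python
# A fuzzy set assigns each element a membership degree in [0, 1]; the
# classical fuzzy AND/OR of Zadeh are the pointwise min and max.

def triangular_membership(x, a, b, c):
    """Degree of membership in a triangular fuzzy set rising from a, peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_and(mu1, mu2):
    return min(mu1, mu2)

def fuzzy_or(mu1, mu2):
    return max(mu1, mu2)

# "Warm" peaks at 20 degrees C, "hot" peaks at 35 (made-up sets).
t = 25.0
warm = triangular_membership(t, 10, 20, 30)   # 0.5
hot = triangular_membership(t, 25, 35, 45)    # 0.0
both = fuzzy_and(warm, hot)
either = fuzzy_or(warm, hot)
```

    At 25 °C the statement "it is warm" is true to degree 0.5, not simply true or false, which is exactly the partial-membership idea the book's dedication recalls.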

  18. Recent Advances in Computational Conformal Geometry

    OpenAIRE

    Gu, Xianfeng David; Luo, Feng; Yau, Shing-Tung

    2009-01-01

    Computational conformal geometry focuses on developing computational methodologies on discrete surfaces to discover conformal geometric invariants. In this work, we briefly summarize the recent developments for methods and related applications in computational conformal geometry. There are two major approaches: holomorphic differentials and curvature flow. The holomorphic differential method is a linear method, which is more efficient and robust to triangulations with lower qua...

  19. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
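    The quicksort example in this abstract can be made concrete. The toy implementation below uses a random pivot, which is what turns the average-case O(n log n) bound into an expected bound holding for every input; it is illustrative and not drawn from the book:

```python
import random

def randomized_quicksort(a):
    """Quicksort with a randomly chosen pivot: expected O(n log n)
    comparisons on every input, since no fixed adversarial input
    can reliably trigger the O(n^2) worst case."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```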

  20. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, such as direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. It also contains a great deal of practical advice for code developers and users, and is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo version of a commercial CFD software has been added, which ca...

  1. Computational Methods for Crashworthiness

    Science.gov (United States)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of technology in the numerical simulation of crashes and to provide guidelines for future research.

  2. Advanced topics in computer vision

    CERN Document Server

    Farinella, Giovanni Maria; Cipolla, Roberto

    2013-01-01

    This book presents a broad selection of cutting-edge research, covering both theoretical and practical aspects of reconstruction, registration, and recognition. The text provides an overview of challenging areas and descriptions of novel algorithms. Features: investigates visual features, trajectory features, and stereo matching; reviews the main challenges of semi-supervised object recognition, and a novel method for human action categorization; presents a framework for the visual localization of MAVs, and for the use of moment constraints in convex shape optimization; examines solutions to t

  3. Advances in energy harvesting methods

    CERN Document Server

    Elvin, Niell

    2012-01-01

    Advances in Energy Harvesting Methods presents a state-of-the-art understanding of diverse aspects of energy harvesting with a focus on: broadband energy conversion, new concepts in electronic circuits, and novel materials. This book covers recent advances in energy harvesting using different transduction mechanisms; these include methods of performance enhancement using nonlinear effects, non-harmonic forms of excitation and non-resonant energy harvesting, fluidic energy harvesting, and advances in both low-power electronics and material science. The contributors include a brief liter

  4. Computational electromagnetics recent advances and engineering applications

    CERN Document Server

    2014-01-01

    Emerging Topics in Computational Electromagnetics presents advances in computational electromagnetics (CEM). This book is designed to fill the existing gap in the current CEM literature, which covers only the conventional numerical techniques for solving traditional EM problems. The book examines new algorithms, and applications of these algorithms for solving problems of current interest that are not readily amenable to efficient treatment using existing techniques. The authors discuss solution techniques for problems arising in nanotechnology, bioEM, metamaterials, as well as multiscale problems. They present techniques that utilize recent advances in computer technology, such as parallel architectures, and address the increasing need to solve large and complex problems in a time-efficient manner by using highly scalable algorithms.

  5. Managing Security in Advanced Computational Infrastructure

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Proposed by the Education Ministry of China, the Advanced Computational Infrastructure (ACI) aims at sharing geographically distributed high-performance computing and huge-capacity data resources among the universities of China. With the fast development of large-scale applications in ACI, the security requirements have become more and more urgent. The special security needs of ACI are first analyzed in this paper, and a security management system based on ACI is presented. Finally, the realization of the security management system is discussed.

  6. Advances and Challenges in Computational Plasma Science

    Energy Technology Data Exchange (ETDEWEB)

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  7. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  8. Computational Intelligence Paradigms in Advanced Pattern Classification

    CERN Document Server

    Jain, Lakhmi

    2012-01-01

    This monograph presents selected areas of application of pattern recognition and classification approaches, including handwriting recognition, medical image analysis and interpretation, development of cognitive systems for computer image understanding, moving object detection, advanced image filtration, and intelligent multi-object labelling and classification. Scientists, application engineers, professors, and students will find this book useful.

  9. Advances in computers improving the web

    CERN Document Server

    Zelkowitz, Marvin

    2010-01-01

    This is volume 78 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. The series covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that continue to be of significant, lasting value i

  10. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  11. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. 
In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  12. Advances in Monte Carlo computer simulation

    Science.gov (United States)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
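    The Metropolis method named in this abstract can be sketched for a toy system. The harmonic energy function and every parameter below are illustrative assumptions, not taken from the talk:

```python
import math
import random

def metropolis(energy, x0, beta, steps, step_size=0.5, seed=0):
    """Metropolis sampling: propose a local move and accept it with
    probability min(1, exp(-beta * dE)), which samples the Boltzmann
    distribution proportional to exp(-beta * E(x)) in the long run."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        dE = energy(x_new) - energy(x)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = x_new          # accept the proposed move
        samples.append(x)      # a rejected move repeats the old state
    return samples

# Harmonic potential E(x) = x^2 / 2 at beta = 1: the sampled distribution
# should approach a standard Gaussian (mean 0, variance 1).
samples = metropolis(lambda x: 0.5 * x * x, x0=0.0, beta=1.0, steps=20000)
mean = sum(samples) / len(samples)
```

    The sample mean drifts toward 0 and the sample variance toward 1/beta, which is the kind of thermodynamic estimate such simulations are used to compute.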

  13. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  14. Development and Application of Computational/In Vitro Toxicological Methods for Chemical Hazard Risk Reduction of New Materials for Advanced Weapon Systems

    Science.gov (United States)

    Frazier, John M.; Mattie, D. R.; Hussain, Saber; Pachter, Ruth; Boatz, Jerry; Hawkins, T. W.

    2000-01-01

    The development of quantitative structure-activity relationships (QSAR) is essential for reducing the chemical hazards of new weapon systems. The current collaboration between HEST (toxicology research and testing), MLPJ (computational chemistry) and PRS (computational chemistry, new propellant synthesis) is focusing R&D efforts on basic research goals that will rapidly transition to useful products for propellant development. Computational methods are being investigated that will assist in forecasting cellular toxicological end-points. Models developed from these chemical structure-toxicity relationships are useful for predicting the toxicological endpoints of new related compounds. Research is focusing on the evaluation of tools to be used for the discovery of such relationships and the development of models of the mechanisms of action. Combinations of computational chemistry techniques, in vitro toxicity methods, and statistical correlations will be employed to develop and explore potential predictive relationships; results for series of molecular systems that demonstrate the viability of this approach are reported. A number of hydrazine salts have been synthesized for evaluation. Computational chemistry methods are being used to elucidate the mechanism of action of these salts. Toxicity endpoints such as viability (LDH) and changes in enzyme activity (glutathione peroxidase and catalase) are being experimentally measured as indicators of cellular damage. Extrapolation from computational/in vitro studies to human toxicity is the ultimate goal. The product of this program will be a predictive tool to assist in the development of new, less toxic propellants.

  15. On computational methods for crashworthiness

    Science.gov (United States)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computational methodologies. The latter include more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time-step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  16. Neutronics computational methods for cores

    International Nuclear Information System (INIS)

    This engineering-oriented publication contains a detailed presentation of neutronics computational methods for cores. More precisely, it presents the neutronics equations: the Boltzmann equation for neutron transport, resolution principles, and the use of high-performance computing. The next parts present the problem setting (values to be computed, computation software and methods) and nuclear data and their processing. Then the authors describe the application of the Monte Carlo method to reactor physics: resolution of the transport equation by the Monte Carlo method, convergence of a Monte Carlo calculation and the notion of quality factor, and software. Deterministic methods are then addressed: discretization, processing of resonant absorption, lattice calculations, core calculation, deterministic software, fuel evolution, and kinetics. The next chapter addresses multi-physics aspects: the necessity of a coupling, principles of neutronic/thermal-hydraulic coupling, and an example of an accidental transient. The last part addresses the verification approach and the validation of neutronics computational codes.

  17. International Conference on Computers and Advanced Technology in Education

    CERN Document Server

    Advanced Information Technology in Education

    2012-01-01

    The volume includes a set of selected papers, extended and revised, from the 2011 International Conference on Computers and Advanced Technology in Education. With the development of computers and advanced technology, human social activities are changing fundamentally. Education, and especially education reform in different countries, has benefited greatly from computers and advanced technology. Generally speaking, education is a field which needs more information, while computers, advanced technology and the internet are good information providers. Also, with the aid of computers and advanced technology, education can be made more effective. Therefore, computers and advanced technology should be regarded as an important medium in modern education. The volume Advanced Information Technology in Education is to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of computers and advanced technology in education to d...

  18. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders;

    2015-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in...

  19. Preconditioned method in parallel computation

    Institute of Scientific and Technical Information of China (English)

    Wu Ruichan; Wei Jianing

    2006-01-01

    The grid equations of a domain decomposed for parallel computation are solved, and a method of local orthogonalization for large-scale numerical computation is presented. It constructs the preconditioned iteration matrix by combining a simplified LU decomposition with local orthogonalization, and the convergence of the solution is proved. As indicated by the example, this algorithm can increase computational efficiency and is quite stable.
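    The record does not spell out how the preconditioner is built from the LU decomposition and local orthogonalization, so the sketch below substitutes the simplest stand-in, a diagonal (Jacobi) preconditioner, purely to illustrate the general preconditioned-iteration idea:

```python
def preconditioned_iteration(A, b, iters=100):
    """Solve A x = b by the preconditioned iteration x <- x + M^{-1} (b - A x),
    here with M = diag(A) (the Jacobi choice); this converges when A is
    strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # residual r_i = b_i - (A x)_i, then apply the diagonal preconditioner
        x = [x[i] + (b[i] - sum(A[i][j] * x[j] for j in range(n))) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = preconditioned_iteration(A, b)   # converges to [1/11, 7/11]
```

    A better preconditioner, such as an incomplete LU factorization, shrinks the iteration count further; that is the kind of efficiency gain the abstract reports.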

  20. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders;

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in topolo...

  1. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  2. Advanced proton imaging in computed tomography

    CERN Document Server

    Mattiazzo, S; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-01-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has great impact on the scanner's performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that would offer higher scanning speed, better spatial resolution and a lower material budget than present state-of-the-art detectors, leading to enhanced performance. This advancement in performance is achieved by employing the very latest developments in monolithic active pixel detectors (to build high granularity, low material budget, ...

  3. Transport modeling and advanced computer techniques

    International Nuclear Information System (INIS)

    A workshop was held at the University of Texas in June 1988 to consider the current state of transport codes and whether improved user interfaces would make the codes more usable and accessible to the fusion community. Also considered was the possibility that a software standard could be devised to ease the exchange of routines between groups. It was noted that two of the major obstacles to exchanging routines now are the variety of geometrical representations and choices of units. While the workshop formulated no standards, it was generally agreed that good software engineering would aid in the exchange of routines, and that a continued exchange of ideas between groups would be worthwhile. It seems that before we begin to discuss software standards we should review the current state of computer technology, both hardware and software, to see what influence recent advances might have on our software goals. This is done in this paper.

  4. Advanced Scientific Computing Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  5. Advanced methods of fatigue assessment

    CERN Document Server

    Radaj, Dieter

    2013-01-01

    This book presents advanced methods of brittle fracture and fatigue assessment. The Neuber concept of fictitious notch rounding is enhanced with regard to theory and application. The stress intensity factor concept for cracks is extended to pointed and rounded corner notches as well as to locally elastic-plastic material behaviour. The averaged strain energy density within a circular sector volume around the notch tip is shown to be suitable for strength assessments. Finally, the various implications of cyclic plasticity on fatigue crack growth are explained, with emphasis laid on the ΔJ-integral approach. This book continues the expositions of the authors' well-known reference work in German, 'Ermüdungsfestigkeit – Grundlagen für Ingenieure' (Fatigue strength – fundamentals for engineers).

  6. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    Science.gov (United States)

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  7. Computational Methods For Composite Structures

    Science.gov (United States)

    Chamis, Christos C.

    1988-01-01

    Selected methods of computation for simulating the mechanical behavior of fiber/matrix composite materials are described in this report. For each method, the report describes the significance of the behavior to be simulated, the procedure for simulation, and representative results. The following applications are discussed: effects of progressive degradation of interply layers on the responses of composite structures, dynamic responses of notched and unnotched specimens, interlaminar fracture toughness, progressive fracture, thermal distortions of a sandwich composite structure, and metal-matrix composite structures for use at high temperatures. The methods demonstrate the effectiveness of computational simulation as applied to complex composite structures in general and aerospace-propulsion structural components in particular.

  8. OPENING REMARKS: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  9. TRAC-P1: an advanced best estimate computer program for PWR LOCA analysis. I. Methods, models, user information, and programming details

    International Nuclear Information System (INIS)

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced ''best estimate'' predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.

  10. Computational Methods for Simulating Quantum Computers

    NARCIS (Netherlands)

    Raedt, H. De; Michielsen, K.

    2006-01-01

    This review gives a survey of numerical algorithms and software to simulate quantum computers. It covers the basic concepts of quantum computation and quantum algorithms and includes a few examples that illustrate the use of simulation software for ideal and physical models of quantum computers.

  11. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  12. Activities of the Research Institute for Advanced Computer Science

    Science.gov (United States)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  13. Computational structural mechanics methods research using an evolving framework

    Science.gov (United States)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  14. Condition Monitoring Through Advanced Sensor and Computational Technology

    International Nuclear Information System (INIS)

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and the piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.

  15. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp...

  16. Advanced median method for timing jitter compensation

    Institute of Scientific and Technical Information of China (English)

    Wang Chen; Zhu Jiangmiao; Jan Verspecht; Liu Mingliang; Li Yang

    2008-01-01

    Timing jitter is one of the main factors that influence the accuracy of time-domain precision measurement, and its compensation is a problem of wide concern. Because of the flaws of the median method, the PDF deconvolution method, and the synthetic method, we put forward a new method for timing jitter compensation, called the advanced median method. The theory of the advanced median method, based on probability and statistics, is analyzed, and the process of the advanced median method is summarized in this paper. Simulation and experiment show that, compared with other methods, the new method can compensate timing jitter effectively.
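The record above describes median-based jitter compensation only in outline. As a purely illustrative sketch (not the authors' algorithm; the waveform, jitter level, and acquisition count are assumptions), the following shows why a per-point median across repeated acquisitions resists timing jitter better than a per-point mean: the mean smears a fast edge, while the median of a monotone edge tracks the median of the jitter, which is near zero.

```python
import numpy as np

# Illustrative only: compare per-point mean vs. per-point median across
# repeated acquisitions of a fast edge corrupted by random timing jitter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)

def edge(x):
    """An ideal fast rising edge (sigmoid), standing in for a sampled waveform."""
    return 1.0 / (1.0 + np.exp(-80.0 * (x - 0.5)))

# 500 acquisitions, each shifted in time by Gaussian jitter (numbers hypothetical).
jitter = rng.normal(0.0, 0.02, size=500)
acquisitions = np.stack([edge(t - dt) for dt in jitter])

mean_wave = acquisitions.mean(axis=0)          # averaging smears the edge
median_wave = np.median(acquisitions, axis=0)  # median tracks the edge shape better

mean_err = np.abs(mean_wave - edge(t)).max()
median_err = np.abs(median_wave - edge(t)).max()
print(mean_err, median_err)
```

With symmetric jitter, the per-point median of a monotone edge equals the edge shifted by the sample median of the jitter, so its worst-case error stays small, whereas the mean's error grows with the jitter spread.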

  17. TRAC-P1: an advanced best estimate computer program for PWR LOCA analysis. I. Methods, models, user information, and programming details

    Energy Technology Data Exchange (ETDEWEB)

    1978-05-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced "best estimate" predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.

  18. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow-up to a previous book on the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can greatly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  19. Forecasting methods for computer technology

    Energy Technology Data Exchange (ETDEWEB)

    Worlton, W.J.

    1978-01-01

    How well the computer site manager avoids future dangers and takes advantage of future opportunities depends to a considerable degree on how much anticipatory information he has available. People who rise in management are expected with each successive promotion to concern themselves with events further in the future. It is the function of technology projection to increase this stock of information about possible future developments in order to put planning and decision making on a more rational basis. Past efforts at computer technology projections have an accuracy that declines exponentially with time. Thus, precisely defined technology projections beyond about three years should be used with considerable caution. This paper reviews both subjective and objective methods of technology projection and gives examples of each. For an integrated view of future prospects in computer technology, a framework for technology projection is proposed.

  20. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi...

  1. Computational neuroscience for advancing artificial intelligence

    Directory of Open Access Journals (Sweden)

    Fernando P. Ponce

    2011-07-01

    Full Text Available Review of the book by Alonso, E., & Mondragón, E. (2011). Hershey, NY: Medical Information Science Reference. Neuroscience as a discipline pursues an understanding of the brain and its relation to the functioning of the mind through analysis of the interaction of diverse physical, chemical, and biological processes (Bassett & Gazzaniga, 2011). In parallel, numerous disciplines have progressively made significant contributions to this endeavor, among them mathematics, psychology, and philosophy. As a product of this effort, complementary disciplines such as cognitive neuroscience, neuropsychology, and computational neuroscience have emerged alongside traditional neuroscience (Bengio, 2007; Dayan & Abbott, 2005). It is in the context of computational neuroscience as a discipline complementary to traditional neuroscience that Alonso and Mondragón (2011) edited the book Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications.

  2. ADVANCES AT A GLANCE IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    RAJKUMAR SHARMA

    2014-07-01

    Full Text Available In the history of the computational world, sequential uni-processor computers were exploited for years to solve scientific and business problems. To satisfy the demand of compute- and data-hungry applications, it was observed that better response time could be achieved only through parallelism. Large computational problems were partitioned and solved by using multiple CPUs in parallel. Computing performance was further improved by adopting multi-core architecture, which provides hardware parallelism through the use of multiple cores. Efficient resource utilization of a parallel computing environment by using software and hardware parallelism is a major research challenge. Present hardware technologies give algorithm developers the freedom to control and manage resources through software codes, such as threads-to-cores mapping in recent multi-core processors. In this paper, a survey is presented from the beginning of parallel computing up to the use of present state-of-the-art multi-core processors.

  3. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and applications to handling real-life problems. The applications are mostly drawn from real-life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, as well as some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the areas of big data analysis and cloud computing.

  4. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications, as well as embedded and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grid, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. This book therefore includes various theories and practical applications in human-centric computing and embedded and multimedia computing.

  5. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  6. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  7. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  8. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  9. Advances in computing, and their impact on scientific computing.

    Science.gov (United States)

    Giles, Mike

    2002-01-01

    This paper begins by discussing the developments and trends in computer hardware, starting with the basic components (microprocessors, memory, disks, system interconnect, networking and visualization) before looking at complete systems (death of vector supercomputing, slow demise of large shared-memory systems, rapid growth in very large clusters of PCs). It then considers the software side, the relative maturity of shared-memory (OpenMP) and distributed-memory (MPI) programming environments, and new developments in 'grid computing'. Finally, it touches on the increasing importance of software packages in scientific computing, and the increased importance and difficulty of introducing good software engineering practices into very large academic software development projects. PMID:12539947

  10. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture.Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  11. Preface: Special issue: ten years of advances in computer entertainment

    NARCIS (Netherlands)

    Katayose, Haruhiro; Reidsma, Dennis; Rauterberg, M

    2014-01-01

    This special issue celebrates the 10th edition of the International Conference on Advances in Computer Entertainment (ACE) by collecting six selected and revised papers from among this year’s accepted contributions.

  12. Fast computation of the characteristics method on vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Kugo, Teruhiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following are found: (1) there is little difference in computation speed; (2) the ISS method shows faster convergence; and (3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with a free iteration. (author)
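The abstract mentions a table-look-up method for the exponential function, a common trick in characteristics-method transport codes. A minimal sketch follows; the table range, size, and linear interpolation scheme are assumptions for illustration, not details taken from the report.

```python
import numpy as np

# Precomputed table for exp(-x) on [0, X_MAX], with linear interpolation
# between nodes, replacing repeated calls to the exponential function.
X_MAX, N = 20.0, 4096
nodes = np.linspace(0.0, X_MAX, N)
table = np.exp(-nodes)
step = nodes[1] - nodes[0]

def exp_neg(x):
    """Approximate exp(-x) for x >= 0 via table look-up with linear interpolation."""
    x = np.minimum(np.asarray(x, dtype=float), X_MAX - step)  # clamp to table range
    i = (x / step).astype(int)          # left node index
    frac = x / step - i                 # fractional position within the cell
    return table[i] * (1.0 - frac) + table[i + 1] * frac

x = np.linspace(0.0, 10.0, 1000)
err = np.abs(exp_neg(x) - np.exp(-x)).max()
print(err)  # interpolation error, below 1e-5 for this table size
```

For linear interpolation the error is bounded by step**2 / 8 times the maximum second derivative of exp(-x), so the table size trades memory for accuracy.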

  13. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  14. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating.

  15. Method and Tools for Development of Advanced Instructional Systems

    NARCIS (Netherlands)

    Arend, J. van der; Riemersma, J.B.J.

    1994-01-01

    The application of advanced instructional systems (AISs), like computer-based training systems, intelligent tutoring systems and training simulators, is widespread within the Royal Netherlands Army. As a consequence, there is a growing interest in methods and tools to develop effective and efficient...

  16. Advanced reliability methods - A review

    Science.gov (United States)

    Forsyth, David S.

    2016-02-01

    There are a number of challenges to the current practices for Probability of Detection (POD) assessment. Some Nondestructive Testing (NDT) methods, especially those that are image-based, may not provide a simple relationship between a scalar NDT response and a damage size. Some damage types are not easily characterized by a single scalar metric. Other sensing paradigms, such as structural health monitoring, could theoretically replace NDT but require a POD estimate. And the cost of performing large empirical studies to estimate POD can be prohibitive. The response of the research community has been to develop new methods that can be used to generate the same information, POD, in a form that can be used by engineering designers. This paper will highlight approaches to image-based data and complex defects, Model Assisted POD estimation, and Bayesian methods for combining information. This paper will also review the relationship of the POD estimate, confidence bounds, tolerance bounds, and risk assessment.
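The review above discusses Probability of Detection (POD) estimates and their confidence bounds. As a hypothetical illustration (the log-normal model family and all parameter values below are assumptions, not taken from the paper), a widely used POD(a) model and the flaw size detected with 90% probability, often called a90, can be computed as:

```python
import math

def pod(a, mu, sigma):
    """Log-normal POD model: POD(a) = Phi((ln a - mu) / sigma)."""
    z = (math.log(a) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a90(mu, sigma):
    """Flaw size detected with 90% probability: ln a = mu + z_0.90 * sigma."""
    z90 = 1.2815515655446004  # 90th percentile of the standard normal
    return math.exp(mu + z90 * sigma)

# Hypothetical fitted parameters (flaw size in mm), for illustration only.
mu, sigma = math.log(2.0), 0.5
a = a90(mu, sigma)
print(a, pod(a, mu, sigma))  # POD at a90 is 0.9 by construction
```

In practice mu and sigma would be estimated from hit/miss or signal-response data, and a confidence bound on the curve (e.g., a90/95) would accompany the point estimate.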

  17. Advanced Computing Tools and Models for Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  18. Advances in Computing and Information Technology : Proceedings of the Second International Conference on Advances in Computing and Information Technology

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2013-01-01

    The International Conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals for sharing knowledge and results in theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also well-founded evaluations and strong argumentation, were selected and collected in the present proceedings, ...

  19. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  20. Advances in Computer Science and Education

    CERN Document Server

    Huang, Xiong

    2012-01-01

    CSE2011 is an integrated conference concentrating its focus on computer science and education. In these proceedings, you can learn about computer science and education research by authors from all around the world. The main role of the proceedings is to serve as an exchange platform for researchers working in the mentioned fields. In order to meet the high quality standards of the Springer AISC series, the organizing committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous referee experts. Secondly, periodic review meetings were held with the reviewers, about five times, to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  1. Advanced Fine Particulate Characterization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Steven Benson; Lingbu Kong; Alexander Azenkeng; Jason Laumb; Robert Jensen; Edwin Olson; Jill MacKenzie; A.M. Rokanuzzaman

    2007-01-31

    The characterization and control of emissions from combustion sources are of significant importance in improving local and regional air quality. Such emissions include fine particulate matter, organic carbon compounds, and NO{sub x} and SO{sub 2} gases, along with mercury and other toxic metals. This project involved four activities: Further Development of Analytical Techniques for PM{sub 10} and PM{sub 2.5} Characterization and Source Apportionment and Management; Organic Carbonaceous Particulate and Metal Speciation for Source Apportionment Studies; Quantum Modeling; and High-Potassium Carbon Production with Biomass-Coal Blending. The key accomplishments included the development of improved automated methods to characterize the inorganic and organic components of particulate matter. The methods involved the use of scanning electron microscopy and x-ray microanalysis for the inorganic fraction and a combination of extractive methods combined with near-edge x-ray absorption fine structure to characterize the organic fraction. These methods have direct application for source apportionment studies of PM because they provide detailed inorganic analysis along with total organic and elemental carbon (OC/EC) quantification. Quantum modeling using density functional theory (DFT) calculations was used to further elucidate a recently developed mechanistic model for mercury speciation in coal combustion systems and interactions on activated carbon. Reaction energies, enthalpies, free energies, and binding energies of Hg species to the prototype molecules were derived from the data obtained in these calculations. Bimolecular rate constants for the various elementary steps in the mechanism have been estimated using the hard-sphere collision theory approximation, and the results seem to indicate that extremely fast kinetics could be involved in these surface reactions. Activated carbon was produced from a blend of lignite coal from the Center Mine in North Dakota and
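The abstract above mentions estimating bimolecular rate constants with the hard-sphere collision-theory approximation. A minimal sketch of that textbook formula follows; the collision diameter, species pair, and temperature are illustrative assumptions, not values from the report.

```python
import math

# Hard-sphere collision-theory bimolecular rate constant (an upper-bound estimate):
#   k = N_A * pi * d**2 * sqrt(8 * kB * T / (pi * mu))
# where d is the collision diameter (m) and mu the reduced mass (kg).
N_A = 6.02214076e23   # Avogadro constant, 1/mol
KB = 1.380649e-23     # Boltzmann constant, J/K

def hard_sphere_k(d, mu, temperature):
    """Rate constant in m^3 mol^-1 s^-1 from mean relative speed and cross-section."""
    mean_speed = math.sqrt(8.0 * KB * temperature / (math.pi * mu))
    return N_A * math.pi * d**2 * mean_speed

# Illustrative inputs only (not from the report): d = 3.5 Angstrom,
# reduced mass of an Hg-Cl pair, and a flue-gas-like temperature of 400 K.
amu = 1.66053906660e-27
mu = (200.59 * 35.45) / (200.59 + 35.45) * amu
k = hard_sphere_k(3.5e-10, mu, 400.0)
print(k)
```

Because every collision is counted as reactive, such estimates are ceilings on the true rate constant, consistent with the abstract's remark that "extremely fast kinetics could be involved".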

  2. Recent advances in computational mechanics of the human knee joint.

    Science.gov (United States)

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  4. Advances in computational fluid dynamics solvers for modern computing environments

    Science.gov (United States)

    Hertenstein, Daniel; Humphrey, John R.; Paolini, Aaron L.; Kelmelis, Eric J.

    2013-05-01

    EM Photonics has been investigating the application of massively multicore processors to a key problem area: Computational Fluid Dynamics (CFD). While the capabilities of CFD solvers have continually increased and improved to support features such as moving bodies and adjoint-based mesh adaptation, the software architecture has often lagged behind. This has led to poor scaling as core counts reach the tens of thousands. In the modern High Performance Computing (HPC) world, clusters with hundreds of thousands of cores are becoming the standard. In addition, accelerator devices such as NVIDIA GPUs and the Intel Xeon Phi are being installed in many new systems. It is important for CFD solvers to take advantage of the new hardware, as the computations involved are well suited to the massively multicore architecture. In our work, we demonstrate that new features in NVIDIA GPUs can empower existing CFD solvers, using as an example AVUS, a CFD solver developed by the Air Force Research Laboratory (AFRL) and the Volcanic Ash Advisory Center (VAAC). The effort has resulted in increased performance and scalability without sacrificing accuracy. Many well-known codes in the CFD space can benefit from this work, such as FUN3D, OVERFLOW, and TetrUSS. Such codes are widely used in the commercial, government, and defense sectors.

  5. Proceedings: Workshop on Advanced Mathematics and Computer Science for Power Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    None

    1991-08-01

    EPRI's Office of Exploratory Research sponsors a series of workshops that explore how to apply recent advances in mathematics and computer science to the problems of the electric utility industry. In this workshop, participants identified research objectives that may significantly improve the mathematical methods and computer architecture currently used for power system analysis.

  6. Advances in computational studies of energy materials.

    Science.gov (United States)

    Catlow, C R A; Guo, Z X; Miskufova, M; Shevlin, S A; Smith, A G H; Sokol, A A; Walsh, A; Wilson, D J; Woodley, S M

    2010-07-28

    We review recent developments and applications of computational modelling techniques in the field of materials for energy technologies, including hydrogen production and storage, energy storage and conversion, and light absorption and emission. In addition, we present new work on an Sn2TiO4 photocatalyst containing an Sn(II) lone pair, new interatomic potential models for SrTiO3 and GaN, and an exploration of defects in the kesterite/stannite-structured solar cell absorber Cu2ZnSnS4, and we report details of the incorporation of hydrogen into Ag2O and Cu2O. Special attention is paid to the modelling of nanostructured systems, including ceria (CeO2, mixed Ce(x)O(y) and Ce2O3) and group 13 sesquioxides. We consider applications based on both interatomic potential and electronic structure methodologies, and we illustrate the increasingly quantitative and predictive nature of modelling in this field. PMID:20566517

  7. Computational simulation methods for composite fracture mechanics

    Science.gov (United States)

    Murthy, Pappu L. N.

    1988-01-01

    The structural integrity, durability, and damage tolerance of advanced composites are assessed by studying, quantitatively and qualitatively, damage initiation at various scales (micro, macro, and global) and its accumulation and growth leading to global failure. In addition, various fracture toughness parameters associated with a typical damage state and its growth must be determined. Computational structural analysis codes were developed to aid the composite design engineer in performing these tasks. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.
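    Mixed-mode strain energy release rates of the kind extracted from such finite element simulations are commonly estimated with the virtual crack closure technique (VCCT). A hedged sketch of the classic formula, with hypothetical nodal values, is:

```python
def vcct_strain_energy_release(fx, fy, du, dv, da, width):
    """Virtual crack closure technique (VCCT) estimate for a 2-D crack
    on a regular mesh: the work needed to close the crack over one
    element length da, per unit of new crack area, gives the mode-I
    (opening) and mode-II (sliding) strain energy release rates.

    fx, fy -- shear and opening nodal forces at the crack tip (N)
    du, dv -- sliding and opening displacements just behind the tip (m)
    da     -- element length along the crack (m)
    width  -- out-of-plane thickness (m)
    """
    g1 = fy * dv / (2.0 * da * width)   # mode I (opening)
    g2 = fx * du / (2.0 * da * width)   # mode II (sliding)
    total = g1 + g2
    mode_mixity = g2 / total if total else 0.0
    return g1, g2, mode_mixity

# Hypothetical values extracted from a finite element solution:
g1, g2, mix = vcct_strain_energy_release(
    fx=50.0, fy=100.0, du=0.5e-5, dv=1.0e-5, da=1.0e-3, width=0.01)
```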

  8. Advances in FDTD computational electrodynamics photonics and nanotechnology

    CERN Document Server

    Oskooi, Ardavan; Johnson, Steven G

    2013-01-01

    Advances in photonics and nanotechnology have the potential to revolutionize humanity's ability to communicate and compute. To pursue these advances, it is mandatory to understand and properly model interactions of light with materials such as silicon and gold at the nanoscale, i.e., the span of a few tens of atoms laid side by side. These interactions are governed by the fundamental Maxwell's equations of classical electrodynamics, supplemented by quantum electrodynamics. This book presents the current state-of-the-art in formulating and implementing computational models of these interactions. Maxwell's equations are solved using the finite-difference time-domain (FDTD) technique, pioneered by the senior editor, whose prior Artech books in this area are among the top ten most-cited in the history of engineering. You discover the most important advances in all areas of FDTD and PSTD computational modeling of electromagnetic wave interactions. This cutting-edge resource helps you understand the latest develo...
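    The core of the FDTD technique is a leapfrog update of staggered electric and magnetic fields. A minimal one-dimensional toy sketch in normalized units (not taken from the book; real solvers add absorbing boundaries, materials, and dispersive models) looks like this:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=100, src=100):
    """Minimal 1-D FDTD (Yee) sketch in normalized units.

    Ez and Hy live on staggered grids; with the "magic" Courant number
    c*dt/dx = 1 a pulse travels exactly one cell per time step.
    The grid edges act as perfect electric conductors.
    """
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]    # update H from the curl of E
        ez[1:] += hy[1:] - hy[:-1]     # update E from the curl of H
        # Soft Gaussian source injected at one cell
        ez[src] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)
    return ez

ez = fdtd_1d()   # two counter-propagating pulses leave the source cell
```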

  9. Advanced finite element method in structural engineering

    CERN Document Server

    Long, Yu-Qiu; Long, Zhi-Fei

    2009-01-01

    This book systematically introduces the research work on the Finite Element Method completed over the past 25 years. Original theoretical achievements and their applications in the fields of structural engineering and computational mechanics are discussed.

  10. Reliability of an Interactive Computer Program for Advance Care Planning

    OpenAIRE

    Schubart, Jane R.; Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J.

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demon...

  11. Recent advances in boundary element methods

    CERN Document Server

    Manolis, GD

    2009-01-01

    Addresses the needs of the computational mechanics research community in terms of information on boundary integral equation-based methods and techniques applied to a variety of fields. This book collects both original and review articles on contemporary Boundary Element Methods (BEM) as well as on the Mesh Reduction Methods (MRM).

  12. 9th International Conference on Advanced Computing & Communication Technologies

    CERN Document Server

    Mandal, Jyotsna; Auluck, Nitin; Nagarajaram, H

    2016-01-01

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015) held at Asia Pacific Institute of Information Technology, Panipat, India during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

  13. Advances in structure research by diffraction methods

    CERN Document Server

    Brill, R

    1970-01-01

    Advances in Structure Research by Diffraction Methods reviews advances in the use of diffraction methods in structure research. Topics covered include the dynamical theory of X-ray diffraction, with emphasis on Ewald waves in theory and experiment; the dynamical theory of electron diffraction; small-angle scattering; and molecular packing. This book comprises four chapters and begins with an overview of the dynamical theory of X-ray diffraction, especially in terms of how it explains all the absorption and propagation properties of X-rays at the Bragg setting in a perfect crystal. The next
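    Underlying all of these diffraction methods is the Bragg condition n·λ = 2d·sin θ. A minimal numeric illustration (the Cu K-alpha wavelength and Si(111) spacing are standard textbook values, not taken from this book):

```python
import math

def bragg_angle(d_spacing, wavelength, order=1):
    """Bragg diffraction angle theta in degrees from n*lambda = 2*d*sin(theta).

    d_spacing and wavelength must share units (here angstroms).
    Raises ValueError when the reflection cannot occur.
    """
    s = order * wavelength / (2.0 * d_spacing)
    if s > 1.0:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (1.5406 A) on the Si(111) planes (d = 3.1356 A):
theta = bragg_angle(3.1356, 1.5406)   # about 14.2 degrees (2-theta ~ 28.4)
```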

  14. Computational methods for probability of instability calculations

    Science.gov (United States)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based upon the roots of the characteristic equation or the Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
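    The root-based instability criterion can be sketched as follows. This is a crude Monte Carlo illustration, not the efficient reliability and importance-sampling methods the paper develops; the one-degree-of-freedom system and its damping distribution are hypothetical:

```python
import numpy as np

def is_stable(m, c, k):
    """Asymptotic stability of M q'' + C q' + K q = 0, checked via the
    eigenvalues of the first-order companion (state-space) matrix."""
    n = m.shape[0]
    minv = np.linalg.inv(m)
    a = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-minv @ k, -minv @ c]])
    return bool(np.all(np.linalg.eigvals(a).real < 0))

def probability_of_instability(n_samples=2000, seed=0):
    """Brute-force Monte Carlo estimate for a single oscillator whose
    damping coefficient is normally distributed and can go negative."""
    rng = np.random.default_rng(seed)
    m = np.array([[1.0]])
    k = np.array([[4.0]])
    failures = 0
    for c_val in rng.normal(loc=0.3, scale=0.2, size=n_samples):
        if not is_stable(m, np.array([[c_val]]), k):
            failures += 1
    return failures / n_samples
```

    For this distribution the analytic answer is P(c <= 0) ≈ 0.067; the sampling estimate converges slowly, which is exactly why the paper's importance-sampling approach matters.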

  15. NATO Advanced Research Workshop on Vectorization of Advanced Methods for Molecular Electronic Structure

    CERN Document Server

    1984-01-01

    That there have been remarkable advances in the field of molecular electronic structure during the last decade is clear not only to those working in the field but also to anyone else who has used quantum chemical results to guide their own investigations. The progress in calculating the electronic structures of molecules has occurred through the truly ingenious theoretical and methodological developments that have made computationally tractable the underlying physics of electron distributions around a collection of nuclei. At the same time there has been considerable benefit from the great advances in computer technology. The growing sophistication, declining costs and increasing accessibility of computers have let theorists apply their methods to problems in virtually all areas of molecular science. Consequently, each year witnesses calculations on larger molecules than in the year before and calculations with greater accuracy and more complete information on molecular properties. We can surel...

  16. Advanced analysis methods in particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C.; /Fermilab

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.
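    The rare-signal extraction problem can be made concrete with the standard counting-experiment significance estimate widely used in the field (a generic formula, not specific to this review):

```python
import math

def asymptotic_significance(s, b):
    """Median discovery significance for a counting experiment with
    expected signal s on top of expected background b, using the
    common asymptotic formula Z = sqrt(2*((s+b)*ln(1+s/b) - s)).
    It reduces to the naive s/sqrt(b) when s << b."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Small signal on a large background: the naive estimate slightly
# overstates the significance.
naive = 10 / math.sqrt(100)                 # 1.0
better = asymptotic_significance(10, 100)   # ~0.98
```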

  17. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    Science.gov (United States)

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  18. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  19. Advances in computers dependable and secure systems engineering

    CERN Document Server

    Hurson, Ali

    2012-01-01

    Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology. Well-known authors and researchers in the field. Extensive bibliographies with m

  20. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    Science.gov (United States)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. 
Custom compiled applications are
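    The weak-form-to-assembly pipeline that FEniCS automates for TerraFERMA can be illustrated by assembling the simplest finite element problem by hand (a generic 1-D Poisson sketch, not TerraFERMA or FEniCS code):

```python
import numpy as np

def solve_poisson_1d(n_elements=8, f=lambda x: 1.0):
    """Linear finite element solve of -u'' = f on (0,1) with
    u(0) = u(1) = 0 -- the kind of element-by-element weak-form
    assembly that libraries like FEniCS generate automatically
    from a symbolic description of the equations."""
    n = n_elements + 1
    h = 1.0 / n_elements
    K = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n_elements):
        # Element stiffness matrix for linear shape functions
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        # Midpoint-rule load, split between the element's two nodes
        xm = (e + 0.5) * h
        b[e:e + 2] += f(xm) * h / 2.0
    # Apply the homogeneous Dirichlet conditions and solve
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return u

u = solve_poisson_1d(8)   # exact solution is u(x) = x(1-x)/2
```

    Frameworks like TerraFERMA extend exactly this pattern to coupled nonlinear multi-physics systems, handing the assembled systems to PETSc solvers.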

  1. Advanced computational tools for 3-D seismic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Glover, C.W.; Protopopescu, V.A. [Oak Ridge National Lab., TN (United States)]; and others

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  2. [Activities of Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  3. Mathematics for natural scientists II advanced methods

    CERN Document Server

    Kantorovich, Lev

    2016-01-01

    This book covers the advanced mathematical techniques useful for physics and engineering students, presented in a form accessible to physics students, avoiding precise mathematical jargon and laborious proofs. Instead, all proofs are given in a simplified form that is clear and convincing for a physicist. Examples, where appropriate, are given from physics contexts. Both solved and unsolved problems are provided in each chapter. Mathematics for Natural Scientists II: Advanced Methods is the second of two volumes. It follows the first volume on Fundamentals and Basics.

  4. 2014 National Workshop on Advances in Communication and Computing

    CERN Document Server

    Prasanna, S; Sarma, Kandarpa; Saikia, Navajit

    2015-01-01

    The present volume is a compilation of research work in computation, communication, vision sciences, device design, fabrication, upcoming materials and related process design, etc. It is derived from selected manuscripts submitted to the 2014 National Workshop on Advances in Communication and Computing (WACC 2014), Assam Engineering College, Guwahati, Assam, India, which is emerging as a premier platform for discussion and dissemination of know-how in this part of the world. The papers included in the volume are indicative of the recent thrust in computation, communications and emerging technologies. Certain recent advances in ZnO nanostructures for alternate energy generation provide emerging insights into an area that has promise for the energy sector, including conservation and green technology. Similarly, scholarly contributions have focused on malware detection and related issues. Several contributions have focused on biomedical aspects including contributions related to cancer detection using act...

  5. Foreword: Advanced Science Letters (ASL), Special Issue on Computational Astrophysics

    CERN Document Server

    ,

    2009-01-01

    Computational astrophysics has undergone unprecedented development over the last decade, becoming a field of its own. The challenge ahead of us will involve increasingly complex multi-scale simulations. These will bridge the gap between areas of astrophysics such as star and planet formation, or star formation and galaxy formation, that have evolved separately until today. A global knowledge of the physics and modeling techniques of astrophysical simulations is thus an important asset for the next generation of modelers. With the aim of fostering such a global approach, we present the Special Issue on Computational Astrophysics for the Advanced Science Letters (http://www.aspbs.com/science.htm). The Advanced Science Letters (ASL) is a new multi-disciplinary scientific journal which will cover computational astrophysics and cosmology extensively, and will act as a forum for the presentation and discussion of novel work attempting to connect different research areas. This Special Issue collects 9 reviews on 9 k...

  6. Computational methods for unsteady transonic flows

    Science.gov (United States)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to the type of unsteady flow: attached, mixed (attached/separated), and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted, and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  7. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  8. Advances in Computational Fluid-Structure Interaction and Flow Simulation Conference

    CERN Document Server

    Takizawa, Kenji

    2016-01-01

    This contributed volume celebrates the work of Tayfun E. Tezduyar on the occasion of his 60th birthday. The articles it contains were born out of the Advances in Computational Fluid-Structure Interaction and Flow Simulation (AFSI 2014) conference, also dedicated to Prof. Tezduyar and held at Waseda University in Tokyo, Japan on March 19-21, 2014. The contributing authors represent a group of international experts in the field who discuss recent trends and new directions in computational fluid dynamics (CFD) and fluid-structure interaction (FSI). Organized into seven distinct parts arranged by thematic topics, the papers included cover basic methods and applications of CFD, flows with moving boundaries and interfaces, phase-field modeling, computer science and high-performance computing (HPC) aspects of flow simulation, mathematical methods, biomedical applications, and FSI. Researchers, practitioners, and advanced graduate students working on CFD, FSI, and related topics will find this collection to be a defi...

  9. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.
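    A minimal concrete instance of the power flow problem the book treats is the Newton-Raphson solve for a two-bus network (a textbook sketch with hypothetical per-unit values, not the authors' solvers):

```python
import numpy as np

def two_bus_power_flow(p_load, q_load, x=0.1, tol=1e-10, max_iter=20):
    """Newton-Raphson power flow for a two-bus system: a slack bus
    (V = 1.0 pu, angle 0) feeding a PQ load over a lossless line of
    reactance x. For this network the injections at the load bus are
    P = V*sin(delta)/x and Q = (V^2 - V*cos(delta))/x (per unit).
    Returns the load-bus voltage magnitude and angle (v, delta)."""
    v, delta = 1.0, 0.0   # flat start
    for _ in range(max_iter):
        p_inj = v * np.sin(delta) / x
        q_inj = (v * v - v * np.cos(delta)) / x
        mismatch = np.array([p_inj + p_load, q_inj + q_load])
        if np.max(np.abs(mismatch)) < tol:
            break
        # Jacobian of [P, Q] with respect to [delta, v]
        jac = np.array([
            [v * np.cos(delta) / x, np.sin(delta) / x],
            [v * np.sin(delta) / x, (2.0 * v - np.cos(delta)) / x],
        ])
        delta, v = np.array([delta, v]) - np.linalg.solve(jac, mismatch)
    return v, delta

v, delta = two_bus_power_flow(p_load=0.5, q_load=0.2)
```

    From a flat start this converges in a handful of iterations, illustrating the quadratic convergence that makes Newton methods the workhorse of power flow studies.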

  10. Using Parallel Computing Methods in Business Processes

    OpenAIRE

    Machek, Ondrej; Hejda, Jan

    2012-01-01

    In computer science, engineers deal with the issue of how to accelerate the execution of extensive tasks with parallel computing algorithms, which are executed on large networks of cooperating processors. The business world forms large networks of business units, too, and in business management, managers often face similar problems. The aim of this paper is to consider the possibilities of using parallel computing methods in business networks. In the first part, we introduce the issue and make some...
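    The analogy can be made concrete: independent business-unit computations parallelize like any embarrassingly parallel workload. A sketch using Python's standard thread pool, with hypothetical data (for CPU-bound work a process pool would be the usual choice):

```python
from concurrent.futures import ThreadPoolExecutor

def appraise(unit):
    """Stand-in for an expensive, independent per-unit computation
    (e.g. valuing one subsidiary). Units share no state, so the
    work can be distributed freely across workers."""
    name, revenue, cost = unit
    return name, revenue - cost

def appraise_portfolio(units, workers=4):
    """Fan the independent appraisals out across a pool of workers,
    mirroring how independent business units can work concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(appraise, units))

# Hypothetical (name, revenue, cost) records:
result = appraise_portfolio([("a", 10.0, 4.0), ("b", 7.0, 2.0), ("c", 5.0, 6.0)])
```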

  11. Advanced electromagnetic methods for aerospace vehicles

    Science.gov (United States)

    Balanis, Constantine A.; El-Sharawy, El-Budawy; Hashemi-Yeganeh, Shahrokh; Aberle, James T.; Birtcher, Craig R.

    1991-01-01

    The Advanced Helicopter Electromagnetics program is centered on issues that advance technology related to helicopter electromagnetics. Progress was made on three major topics: composite materials; precipitation static corona discharge; and antenna technology. In composite materials, the research has focused on the measurement of their electrical properties, and the modeling of material discontinuities and their effect on the radiation pattern of antennas mounted on or near material surfaces. The electrical properties were used to model antenna performance when mounted on composite materials. Since helicopter platforms include several antenna systems at VHF and UHF bands, measuring techniques are being explored that can be used to measure the properties at these bands. The effort on corona discharge and precipitation static was directed toward the development of a new two-dimensional Voltage Finite Difference Time Domain computer program. Results indicate the feasibility of using potentials for simulating electromagnetic problems in the cases where potentials become primary sources. In antenna technology the focus was on Polarization Diverse Conformal Microstrip Antennas, Cavity Backed Slot Antennas, and Varactor Tuned Circular Patch Antennas. Numerical codes were developed for the analysis of two probe-fed rectangular and circular microstrip patch antennas fed by resistive and reactive power divider networks.

  12. Bio-inspired computational techniques based on advanced condition monitoring

    Institute of Scientific and Technical Information of China (English)

    Su Liangcheng; He Shan; Li Xiaoli; Li Xinglin

    2011-01-01

    The application of bio-inspired computational techniques to the field of condition monitoring is addressed. First, bio-inspired computational techniques are briefly introduced, and the advantages and disadvantages of these computational methods are made clear. Then, the roles of condition monitoring in predictive maintenance and failure prediction, and the development trends of condition monitoring, are discussed. Finally, a case study on the condition monitoring of a grinding machine is described, which shows the application of a bio-inspired computational technique to a practical condition monitoring system.

  13. The advanced computational testing and simulation toolkit (ACTS)

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. 
The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, and the reduction of duplicate effort.

  14. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  15. Advances in iterative methods for nonlinear equations

    CERN Document Server

    Busquier, Sonia

    2016-01-01

    This book focuses on the approximation of nonlinear equations using iterative methods. Nine contributions are presented on the construction and analysis of these methods, the coverage encompassing convergence, efficiency, robustness, dynamics, and applications. Many problems are stated in the form of nonlinear equations, using mathematical modeling. In particular, a wide range of problems in Applied Mathematics and in Engineering can be solved by finding the solutions to these equations. The book reveals the importance of studying convergence aspects in iterative methods and shows that selection of the most efficient and robust iterative method for a given problem is crucial to guaranteeing a good approximation. A number of sample criteria for selecting the optimal method are presented, including those regarding the order of convergence, the computational cost, and the stability, including the dynamics. This book will appeal to researchers whose field of interest is related to nonlinear problems and equations...

  16. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
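
    The AMV+ algorithm itself is specialized, but the Monte Carlo verification baseline mentioned in the abstract can be sketched for a toy performance function g = R - L with uncertain load and resistance; the normal distributions and their parameters below are assumptions for illustration only.

    ```python
    # Plain Monte Carlo estimate of the probability of failure P(g < 0),
    # where g = resistance - load. Distributions and parameters are assumed.
    import random

    def monte_carlo_pf(n=200_000, seed=1):
        rng = random.Random(seed)
        failures = 0
        for _ in range(n):
            resistance = rng.gauss(10.0, 1.0)  # assumed capacity, N(10, 1)
            load = rng.gauss(6.0, 1.5)         # assumed demand, N(6, 1.5)
            if resistance - load < 0:          # performance function g < 0
                failures += 1
        return failures / n

    print(f"P(failure) ~ {monte_carlo_pf():.4f}")
    ```

    For this linear g with normal inputs the exact answer is available in closed form, which is what makes Monte Carlo a convenient check; methods like AMV+ exist to keep the evaluation count low when g is an expensive implicit function.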

  17. Computational Chemistry Using Modern Electronic Structure Methods

    Science.gov (United States)

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be performed even for large molecules.

  18. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  19. Computational methods for global/local analysis

    Science.gov (United States)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  20. Some methods of computational geometry applied to computer graphics

    NARCIS (Netherlands)

    Overmars, M.H.; Edelsbrunner, H.; Seidel, R.

    1984-01-01

    Windowing a two-dimensional picture means determining those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited to store the picture.
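
    As a point of reference for the windowing problem, a single segment-versus-window visibility query can be decided with the standard Liang-Barsky parametric test. The paper's contribution concerns data structures for answering many such queries efficiently; this naive per-segment check does not attempt that, and the coordinates are invented.

    ```python
    # Liang-Barsky test: is any part of a segment visible through an
    # axis-parallel window (xmin, ymin, xmax, ymax)?
    def visible(seg, window):
        (x0, y0), (x1, y1) = seg
        xmin, ymin, xmax, ymax = window
        dx, dy = x1 - x0, y1 - y0
        t0, t1 = 0.0, 1.0  # visible parameter interval along the segment
        for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                     (-dy, y0 - ymin), (dy, ymax - y0)):
            if p == 0:
                if q < 0:            # parallel to this boundary and outside it
                    return False
            else:
                t = q / p
                if p < 0:
                    t0 = max(t0, t)  # crossing into the window
                else:
                    t1 = min(t1, t)  # crossing out of the window
                if t0 > t1:
                    return False
        return True

    window = (0, 0, 10, 10)
    print(visible(((-5, 5), (15, 5)), window))   # crosses the window
    print(visible(((12, 0), (12, 10)), window))  # entirely to the right
    ```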

  1. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  2. Advanced applications of boundary-integral equation methods

    International Nuclear Information System (INIS)

    Numerical analysis has become the basic tool for both design and research problems in solid mechanics. The need for accuracy and detail, plus the availablity of the high speed computer has led to the development of many new modeling methods ranging from general purpose structural analysis finite element programs to special purpose research programs. The boundary-integral equation (BIE) method is based on classical mathematical techniques but is finding new life as a basic stress analysis tool for engineering applications. The paper summarizes some advanced elastic applications of fracture mechanics and three-dimensional stress analysis, while referencing some of the much broader developmental effort. Future emphasis is needed to exploit the BIE method in conjunction with other techniques such as the finite element method through the creation of hybrid stress analysis methods. (Auth.)

  3. Advances in Packaging Methods, Processes and Systems

    Directory of Open Access Journals (Sweden)

    Nitaigour Premchand Mahalik

    2014-10-01

    The food processing and packaging industry is becoming a multi-trillion dollar global business. The reason is that the recent increase in incomes in traditionally less economically developed countries has led to a rise in standards of living that includes a significantly higher consumption of packaged foods. As a result, food safety guidelines have become more stringent than ever. At the same time, the number of research and educational institutions, that is, the number of potential researchers and stakeholders, has increased in the recent past. This paper reviews recent developments in food processing and packaging (FPP), keeping in view the aforementioned advancements and bearing in mind that FPP is an interdisciplinary area in which materials, safety, systems, regulation, and supply chains play vital roles. In particular, the review covers processing and packaging principles, standards, interfaces, techniques, methods, and state-of-the-art technologies that are currently in use or in development. Recent advances such as smart packaging, non-destructive inspection methods, printing techniques, application of robotics and machinery, automation architecture, software systems and interfaces are reviewed.

  4. Advances in soft computing, intelligent robotics and control

    CERN Document Server

    Fullér, Robert

    2014-01-01

    Soft computing, intelligent robotics and control are in the core interest of contemporary engineering. Essential characteristics of soft computing methods are the ability to handle vague information, to apply human-like reasoning, their learning capability, and ease of application. Soft computing techniques are widely applied in the control of dynamic systems, including mobile robots. The present volume is a collection of 20 chapters written by respected experts in the field, addressing various theoretical and practical aspects in soft computing, intelligent robotics and control. The first part of the book concerns issues of intelligent robotics, including robust fixed point transformation design, experimental verification of the input-output feedback linearization of a differentially driven mobile robot, and applying kinematic synthesis to micro electro-mechanical systems design. The second part of the book is devoted to fundamental aspects of soft computing. This includes practical aspects of fuzzy rule ...

  5. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen

    2014-01-01

    This book offers a state of the art collection covering themes related to Advanced Intelligent Computational Technologies and Decision Support Systems which can be applied to fields like healthcare assisting the humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies in themes like: intelligent predictive diagnosis; intelligent analyzing of medical images; new format for coding of single and sequences of medical images; Medical Decision Support Systems; diagnosis of Down’s syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT Images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks for real-life problems solving; present and perspectives for Electronic Healthcare Record Systems; adaptive approaches for noise reduction in sequences of CT images etc.

  6. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  7. Computational Methods for Rough Classification and Discovery.

    Science.gov (United States)

    Bell, D. A.; Guan, J. W.

    1998-01-01

    Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…
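
    The core rough-set operations the abstract mentions, partitioning records into indiscernibility classes and forming lower and upper approximations of a target class, can be illustrated directly. The car-test attribute names and records below are hypothetical, not drawn from the cited database.

    ```python
    # Rough-set approximations of a target set of record ids.
    def equivalence_classes(records, attributes):
        """Partition records into classes that agree on all given attributes."""
        classes = {}
        for r in records:
            key = tuple(r[a] for a in attributes)
            classes.setdefault(key, []).append(r["id"])
        return list(classes.values())

    def approximations(classes, target):
        """Lower approximation: union of classes fully inside the target.
        Upper approximation: union of classes that intersect the target."""
        lower, upper = set(), set()
        for c in classes:
            if set(c) <= target:
                lower |= set(c)
            if set(c) & target:
                upper |= set(c)
        return lower, upper

    records = [
        {"id": 1, "engine": "v6", "mileage": "high"},
        {"id": 2, "engine": "v6", "mileage": "high"},
        {"id": 3, "engine": "v4", "mileage": "low"},
    ]
    classes = equivalence_classes(records, ["engine", "mileage"])
    lower, upper = approximations(classes, target={1, 3})
    print(lower, upper)  # records 1 and 2 are indiscernible, so only 3 is certain
    ```

    The gap between the two approximations (here, records 1 and 2) is the boundary region, which is exactly where rough-set rule discovery reports uncertainty.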

  8. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
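
    The two-rating computation described above can be sketched in a few lines: mean channel velocity V comes from the index velocity rating (here a simple linear fit), area A from the stage-area rating, and discharge Q = V * A. All calibration numbers and the trapezoidal standard cross section are invented for illustration.

    ```python
    # Index velocity method sketch: Q = V * A from two separate ratings.
    def fit_linear(xs, ys):
        """Ordinary least-squares line: returns (slope, intercept)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    # Hypothetical calibration pairs from periodic discharge measurements:
    index_v = [0.2, 0.5, 0.8, 1.1]   # ADVM index velocity, m/s
    mean_v = [0.25, 0.55, 0.9, 1.2]  # measured mean channel velocity, m/s
    a, b = fit_linear(index_v, mean_v)

    def stage_area(stage):
        """Stage-area rating for an assumed trapezoidal standard cross section."""
        bottom_width, side_slope = 10.0, 2.0  # meters; horizontal run per unit rise
        return stage * (bottom_width + side_slope * stage)

    def discharge(index_velocity, stage):
        v = a * index_velocity + b  # index velocity rating
        area = stage_area(stage)    # stage-area rating
        return v * area             # Q = V * A

    print(f"Q = {discharge(0.6, 1.5):.2f} m^3/s")
    ```

    Separating the two ratings is what lets the method work where stage alone does not determine discharge: the index velocity captures variability (for example, backwater) that a stage-discharge rating would fold into a single curve.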

  9. Advanced reactor physics methods for heterogeneous reactor cores

    Science.gov (United States)

    Thompson, Steven A.

    To maintain the economic viability of nuclear power, the industry has begun to emphasize maximizing the efficiency and output of existing nuclear power plants by using longer fuel cycles, stretch power uprates, shorter outage lengths, mixed-oxide (MOX) fuel and more aggressive operating strategies. In order to accommodate these changes, while still satisfying the peaking factor and power envelope requirements necessary to maintain safe operation, more complex commercial core designs have been implemented, such as an increase in the number of sub-batches and an increase in the use of both discrete and integral burnable poisons. A consequence of the increased complexity of core designs, as well as the use of MOX fuel, is an increase in the neutronic heterogeneity of the core. Such heterogeneous cores introduce challenges for the current methods that are used for reactor analysis. New methods must be developed to address these deficiencies while still maintaining the computational efficiency of existing reactor analysis methods. In this thesis, advanced core design methodologies are developed to be able to adequately analyze the highly heterogeneous core designs which are currently in use in commercial power reactors. These methodological improvements are being pursued with the goal of not sacrificing the computational efficiency which core designers require. More specifically, the PSU nodal code NEM is being updated to include an SP3 solution option, an advanced transverse leakage option, and a semi-analytical NEM solution option.

  10. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
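
    The patented scalable task set is not specified in the abstract, so this sketch substitutes a stand-in workload (terms of the Leibniz series for pi) to illustrate the fixed-interval idea: every machine gets the same time budget, and the rating is how far it progresses through an ever-finer computation.

    ```python
    # Fixed-time benchmarking sketch: count workload steps completed in a
    # fixed interval. The Leibniz-series workload is a hypothetical stand-in.
    import time

    def fixed_time_benchmark(interval_s=0.5):
        """Return (steps completed, current solution estimate); more steps
        completed in the interval means a higher benchmark rating."""
        deadline = time.perf_counter() + interval_s
        steps, pi_over_4, sign = 0, 0.0, 1.0
        while time.perf_counter() < deadline:
            pi_over_4 += sign / (2 * steps + 1)  # one term of the Leibniz series
            sign = -sign
            steps += 1
        return steps, 4 * pi_over_4

    steps, estimate = fixed_time_benchmark()
    print(steps, "terms; pi estimate", round(estimate, 6))
    ```

    The design inverts the usual fixed-work benchmark: rather than timing a fixed problem (which fast machines finish too quickly to measure meaningfully), the work scales until the clock runs out, so the metric stays informative across machines of very different speeds.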

  11. Computer Vision Method in Human Motion Detection

    Institute of Scientific and Technical Information of China (English)

    FU Li; FANG Shuai; XU Xin-he

    2007-01-01

    Human motion detection based on computer vision is a frontier research topic that is attracting increasing attention in the field of computer vision research. The wavelet transform is used to sharpen the ambiguous edges in human motion images. The effect of shadows on the image processing is also removed. The edge extraction can be successfully realized. This is an effective method for the research of human motion analysis systems.

  12. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32 bit microprocessor, the other runs with AT&T's 32100. Both include the corresponding floating point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53 processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  13. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  14. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...

  15. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  16. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  17. International conference on Advances in Intelligent Control and Innovative Computing

    CERN Document Server

    Castillo, Oscar; Huang, Xu; Intelligent Control and Innovative Computing

    2012-01-01

    In the lightning-fast world of intelligent control and cutting-edge computing, it is vitally important to stay abreast of developments that seem to follow each other without pause. This publication features the very latest and some of the very best current research in the field, with 32 revised and extended research articles written by prominent researchers in the field. Culled from contributions to the key 2011 conference Advances in Intelligent Control and Innovative Computing, held in Hong Kong, the articles deal with a wealth of relevant topics, from the most recent work in artificial intelligence and decision-supporting systems, to automated planning, modelling and simulation, signal processing, and industrial applications. Not only does this work communicate the current state of the art in intelligent control and innovative computing, it is also an illuminating guide to up-to-date topics for researchers and graduate students in the field. The quality of the contents is absolutely assured by the high pro...

  18. A method to compute periodic sums

    CERN Document Server

    Gumerov, Nail A

    2013-01-01

    In a number of problems in computational physics, a finite sum of kernel functions centered at $N$ particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum just using a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions ("sources") placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, compri...
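
    The paper's splitting scheme is beyond a snippet, but the quantity it computes can be shown naively: a kernel sum periodized by summing over image copies of each source box until the kernel's decay makes further images negligible (here in 1D). The Gaussian kernel and all numbers are illustrative assumptions, not the paper's setup.

    ```python
    # Naive periodized kernel sum via direct image summation (1D sketch).
    import math

    def periodized_sum(y, sources, weights, period=1.0, images=6):
        """phi(y) = sum_j sum_{m=-images..images} w_j * exp(-(y - x_j - m*period)^2).
        With a rapidly decaying kernel, a few image boxes suffice."""
        total = 0.0
        for x, w in zip(sources, weights):
            for m in range(-images, images + 1):
                total += w * math.exp(-((y - x - m * period) ** 2))
        return total

    sources = [0.1, 0.4, 0.7]
    weights = [1.0, -0.5, 2.0]
    phi = lambda y: periodized_sum(y, sources, weights)
    # A periodized field repeats with the box period, up to tiny truncation error:
    print(abs(phi(0.3) - phi(1.3)) < 1e-9)
    ```

    For slowly decaying kernels such as 1/r this direct truncation converges poorly or not at all, which is precisely why Ewald summation, and the blackbox alternative proposed in the paper, are needed.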

  19. Current methods and advances in bone densitometry

    Energy Technology Data Exchange (ETDEWEB)

    Guglielmi, G. [Dept. of Radiology, Scientific Inst. "CSS", San Giovanni Rotondo (Italy)]; Glueer, C.C. [Dept. of Radiology, Musculoskeletal Section and Osteoporosis Research Group, Univ. of California, San Francisco, CA (United States)]; Majumdar, S. [Dept. of Radiology, Musculoskeletal Section and Osteoporosis Research Group, Univ. of California, San Francisco, CA (United States)]; Blunt, B.A. [Dept. of Radiology, Musculoskeletal Section and Osteoporosis Research Group, Univ. of California, San Francisco, CA (United States)]; Genant, H.K. [Dept. of Radiology, Musculoskeletal Section and Osteoporosis Research Group, Univ. of California, San Francisco, CA (United States)]

    1995-08-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis. (orig.)

  20. Current methods and advances in bone densitometry

    Science.gov (United States)

    Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.

    1995-01-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis.

  1. Computational structural analysis and finite element methods

    CERN Document Server

    Kaveh, A

    2014-01-01

    Graph theory gained initial prominence in science and engineering through its strong links with matrix algebra and computer science. Moreover, the structure of the mathematics is well suited to that of engineering problems in analysis and design. The methods of analysis in this book employ matrix algebra, graph theory and meta-heuristic algorithms, which are ideally suited for modern computational mechanics. Efficient methods are presented that lead to highly sparse and banded structural matrices. The main features of the book include: application of graph theory for efficient analysis; extension of the force method to finite element analysis; application of meta-heuristic algorithms to ordering and decomposition (sparse matrix technology); efficient use of symmetry and regularity in the force method; and simultaneous analysis and design of structures.

  2. Computational methods for inlet airframe integration

    Science.gov (United States)

    Towne, Charles E.

    1988-01-01

    Fundamental equations encountered in computational fluid dynamics (CFD), and analyses used for internal flow are introduced. Irrotational flow; Euler equations; boundary layers; parabolized Navier-Stokes equations; and time averaged Navier-Stokes equations are treated. Assumptions made and solution methods are outlined, with examples. The overall status of CFD in propulsion is indicated.

  3. Computational Methods for Structural Mechanics and Dynamics

    Science.gov (United States)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  4. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  5. 7 CFR 27.92 - Method of payment; advance deposit.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of payment; advance deposit. 27.92 Section 27... Micronaire § 27.92 Method of payment; advance deposit. Any payment or advance deposit under this subpart...,” and may not be made in cash except in cases where the total payment or deposit does not exceed...

  6. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
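The core of the FDTD method is the leapfrog Yee update. A minimal one-dimensional sketch in normalized units (grid size, Courant number and source shape are illustrative choices, not taken from the book):

```python
import numpy as np

# Minimal 1-D FDTD sketch: free space, Courant number 0.5, additive source.
nz, nt, src = 200, 250, 100
ez = np.zeros(nz)   # electric field samples
hy = np.zeros(nz)   # magnetic field samples (staggered half a cell)
for t in range(nt):
    # Update H from the spatial difference (curl) of E.
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial difference (curl) of H.
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # Inject a Gaussian pulse at the grid centre.
    ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)
```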

  7. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5-GFlop system is under construction. 10 refs., 7 figs

  8. The use of advanced computer simulation in structural design

    Energy Technology Data Exchange (ETDEWEB)

    Field, C.J.; Mole, A. [Arup, San Francisco, CA (United States)]; Arkinstall, M. [Arup, Sydney (Australia)]

    2005-07-01

    The benefits that can be gained from the application of advanced numerical simulation in building design were discussed. A review of current practices in structural engineering was presented along with an illustration of a range of international project case studies. Structural engineers use analytical methods to evaluate both static and dynamic loads. Structural design is prescribed by a range of building codes, depending on location, building type and loading, but buildings often do not fit well within the codes, particularly if one wants to take advantage of new technologies and developments in design that are not covered by the code. Advanced simulation refers to the application of mathematical modeling to complex problems, allowing consideration of a wider range of building types and conditions than can be designed reliably using standard practices. Advanced simulation is used to address virtual testing and prototyping, verifying innovative design ideas, forensic engineering, and design optimization. The benefits of advanced simulation include enhanced creativity, improved performance, cost savings, risk management, sustainable design solutions, and better communication. The following 5 case studies illustrated the value gained by using advanced simulation as an integral part of the design process: the earthquake-resistant Maison Hermes in Tokyo; the seismic-resistant braces known as the Unbonded Brace for use in the United States; a simulation of the existing Disney Museum to evaluate its capacity to resist earthquakes; simulation of the MIT Brain and Cognitive Science Project to evaluate the effect of different foundation types on the vibration entering the building; and the Beijing Aquatic Center, whose design was streamlined by optimized structural analysis. It was suggested that industry should encourage the transfer of technology from other professions and should try to collaborate towards a global building model to construct buildings in a more efficient manner. 7 refs

  9. Advanced applications of boundary-integral equation methods

    International Nuclear Information System (INIS)

The BIE (boundary integral equation) method is based on the numerical solution of a set of integral constraint equations which couple boundary tractions (stresses) to boundary displacements. Thus the dimensionality of the problem is reduced by one; only boundary geometry and data are discretized. Stresses at any set of selected interior points are computed following the boundary solution without any further numerical approximations. Thus, the BIE method has inherently greater resolution capability for stress gradients than does the finite element method. Conversely, the BIE method is not efficient for problems involving significant inhomogeneity such as in multi-thin-layered materials, or in elastoplasticity. Some progress in applying the BIE method to the latter problem has been made but much more work remains. Further, the BIE method is only optimal for problems with significant stress risers, and only when boundary stresses are most important. Interior stress calculations are expensive, per point, and can drive the solution costs up rapidly. The current report summarizes some of the advanced elastic applications of fracture mechanics and three-dimensional stress analysis, while referencing some of the much broader developmental effort. Future emphasis is needed to exploit the BIE method in conjunction with other techniques, such as the finite element method, through the creation of hybrid stress analysis methods

  10. An advanced method of heterogeneous reactor theory

    International Nuclear Information System (INIS)

Recent approaches to heterogeneous reactor theory for numerical applications were presented in the course of 8 lectures given at JAERI. The limitations of the initial theory, known after the First Conference on the Peaceful Uses of Atomic Energy held in Geneva in 1955 as Galanine-Feinberg heterogeneous theory (the matrix form of its equations and the lack of a consistent theory for the heterogeneous parameters of a reactor cell), were overcome by transforming the heterogeneous reactor equations to a difference form and by developing a consistent theory for the characteristics of a reactor cell based on detailed space-energy calculations. General few-group (G groups) heterogeneous reactor equations in the dipole approximation are formulated, with the two-dimensional problem extended to three dimensions by a finite Fourier expansion of the axial dependence of the neutron fluxes. A transformation of the initial matrix reactor equations to a difference form is presented. The methods for calculating heterogeneous reactor cell characteristics, giving the relation between the vector flux and vector current on a cell boundary, are based on a set of detailed space-energy neutron flux distribution calculations with zero current across the cell boundary and G calculations with linearly independent currents across the cell boundary. The equations for the reaction-rate matrices are formulated. Specific methods were developed to describe neutron migration in the axial and radial directions, and a resonance-level approach was developed for the numerous high-energy resonances. On the basis of these approaches, the theory, methods and computer codes were developed for 3D space-time reactor problems, including simulation of slow processes with fuel burn-up, control-rod movements and Xe poisoning, as well as fast transients depending on prompt and delayed neutrons. As a result, reactors with several thousand channels having a non-uniform axial structure can be treated feasibly. (author)

  11. Computational modeling, optimization and manufacturing simulation of advanced engineering materials

    CERN Document Server

    2016-01-01

    This volume presents recent research work focused in the development of adequate theoretical and numerical formulations to describe the behavior of advanced engineering materials.  Particular emphasis is devoted to applications in the fields of biological tissues, phase changing and porous materials, polymers and to micro/nano scale modeling. Sensitivity analysis, gradient and non-gradient based optimization procedures are involved in many of the chapters, aiming at the solution of constitutive inverse problems and parameter identification. All these relevant topics are exposed by experienced international and inter institutional research teams resulting in a high level compilation. The book is a valuable research reference for scientists, senior undergraduate and graduate students, as well as for engineers acting in the area of computational material modeling.

  12. Advanced continuous cultivation methods for systems microbiology.

    Science.gov (United States)

    Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo

    2015-09-01

    Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats, and, moreover, enables following of the dynamics of metabolism and detection of metabolic switch-points and optimal growth conditions. We also describe the concept, challenge and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.

  14. Reliability of an interactive computer program for advance care planning.

    Science.gov (United States)

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
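The reliability indices reported above are standard statistics. A small sketch of KR-20 and Cohen's kappa on invented toy data (not the study's data):

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson formula 20 for a respondents x items 0/1 matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion answering 1 per item
    q = 1.0 - p
    var_total = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary ratings of the same subjects."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                       # observed agreement
    pe = (np.mean(a == 1) * np.mean(b == 1) +  # agreement expected by chance
          np.mean(a == 0) * np.mean(b == 0))
    return (po - pe) / (1.0 - pe)

# Toy data: 4 respondents x 3 yes/no items, plus a repeated general-wishes item.
time1_items = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
general_t1 = [1, 1, 1, 0]
general_t2 = [1, 0, 1, 0]
alpha = kr20(time1_items)
kappa = cohens_kappa(general_t1, general_t2)
```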

  15. Reliability of an Interactive Computer Program for Advance Care Planning

    Science.gov (United States)

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  16. Optical design and characterization of an advanced computational imaging system

    Science.gov (United States)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds built.

  17. The application of advanced rotor (performance) methods for design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bussel, G.J.W. van [Delft Univ. of Technology, Inst. for Wind Energy, Delft (Netherlands)

    1997-08-01

    The calculation of loads and performance of wind turbine rotors has been a topic for research over the last century. The principles for the calculation of loads on rotor blades with a given specific geometry, as well as the development of optimal shaped rotor blades have been published in the decades that significant aircraft development took place. Nowadays advanced computer codes are used for specific problems regarding modern aircraft, and application to wind turbine rotors has also been performed occasionally. The engineers designing rotor blades for wind turbines still use methods based upon global principles developed in the beginning of the century. The question what to expect in terms of the type of methods to be applied in a design environment for the near future is addressed here. (EG) 14 refs.
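The "global principles" referred to are essentially blade-element/momentum (BEM) methods. A minimal fixed-point sketch for one annulus follows; the section data (local speed ratio, solidity, aerofoil coefficients) are illustrative placeholders, with no tip-loss or stall corrections:

```python
import math

# Blade-element/momentum iteration for a single rotor annulus (sketch).
lam_r, sigma = 5.0, 0.02          # local speed ratio, local solidity (assumed)
cl, cd = 1.0, 0.01                # assumed constant aerofoil coefficients
a, ap = 0.0, 0.0                  # axial / tangential induction factors
for _ in range(200):
    phi = math.atan2(1.0 - a, (1.0 + ap) * lam_r)   # inflow angle
    cn = cl * math.cos(phi) + cd * math.sin(phi)    # normal force coefficient
    ct = cl * math.sin(phi) - cd * math.cos(phi)    # tangential force coefficient
    a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
    ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
    # Under-relaxation keeps the fixed-point iteration stable.
    a, ap = 0.5 * a + 0.5 * a_new, 0.5 * ap + 0.5 * ap_new
```

For these numbers the iteration settles on a physically plausible axial induction well below the momentum-theory limit of 0.5.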

  18. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, Donald P. [Cornell Univ., Ithaca, NY (United States); Hencey, Brandon M. [Cornell Univ., Ithaca, NY (United States)

    2013-08-20

    Current building energy simulation technology requires excessive labor, time and expertise to create building energy models, excessive computational time for accurate simulations and difficulties with the interpretation of the results. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods to interpret simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in the domain of whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.

  19. Fine analysis on advanced detection of transient electromagnetic method

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Liu Shengdong; Yang Zhen; Wang Zhijun; Huang Lanying

    2012-01-01

    Fault fracture zones and water-bearing bodies in front of the driving head are the main sources of disasters in mine laneways, so their advance detection and prediction are important in order to provide reliable technical support for excavation. Based on electromagnetic induction theory, we analyzed the characteristics of the primary and secondary fields with a positive-and-negative current waveform, proposed fine processing of the advance detection using the variation rate of apparent resistivity, and introduced in detail the computational formulae and procedures. Results of physical simulation experiments illustrate that the tectonic interface of modules can be judged by the first-order rate of apparent resistivity with a boundary error of 5%, and that the position of a water body determined by the fine analysis method agrees well with the result of borehole drilling. This shows that, in terms of distinguishing structural and aqueous anomalies, the first-order rate of apparent resistivity is more sensitive than the second-order rate. Some remaining problems are suggested for future solutions.
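The first-order rate of apparent resistivity is, in essence, a numerical first derivative along the advance direction. A sketch on a synthetic two-layer profile (all numbers are illustrative, not from the experiments):

```python
import numpy as np

# Boundary picking from the first-order rate of apparent resistivity (sketch).
depth = np.linspace(0.0, 100.0, 101)            # metres ahead of the face
rho = np.where(depth < 60.0, 100.0, 20.0)       # resistivity drop: water-bearing zone
rho = rho + np.random.default_rng(0).normal(0.0, 0.5, depth.size)  # measurement noise
rate = np.gradient(rho, depth)                  # first-order rate d(rho)/dz
boundary = depth[np.argmax(np.abs(rate))]       # steepest change marks the interface
```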

  20. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligence methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, now at its 27th edition, has become a traditional scientific event bringing together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers' peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  1. Advances in computer technology: impact on the practice of medicine.

    Science.gov (United States)

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D work stations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Also cardiac ischemia is associated with an increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins, hsp, in maintaining cell stability under the stress of disease.

  2. Advanced applications of boundary-integral equation methods

    International Nuclear Information System (INIS)

Numerical analysis has become the basic tool for both design and research problems in solid mechanics. The boundary-integral equation (BIE) method is based on classical mathematical techniques but is finding new life as a basic stress analysis tool for engineering applications. The BIE method is based on the numerical solution of a set of integral constraint equations which couple boundary tractions (stresses) to boundary displacements. Thus the dimensionality of the problem is reduced by one; only boundary geometry and data are discretized. Stresses at any set of selected interior points are computed following the boundary solution without any further numerical approximations. Thus, the BIE method has inherently greater resolution capability for stress gradients than does the finite element method. Conversely, the BIE method is not efficient for problems involving significant inhomogeneity such as in multi-thin-layered materials, or in elastoplasticity. Some progress in applying the BIE method to the latter problem has been made but much more work remains. Further, the BIE method is only optimal for problems with significant stress risers, and only when boundary stresses are most important. Interior stress calculations are expensive, per point, and can drive the solution costs up rapidly. The current report summarizes some of the advanced elastic applications of fracture mechanics and three-dimensional stress analysis, while referencing some of the much broader developmental effort. (Auth.)

  3. Local Search Methods for Quantum Computers

    CERN Document Server

    Hogg, T; Hogg, Tad; Yanik, Mehmet

    1998-01-01

    Local search algorithms use the neighborhood relations among search states and often perform well for a variety of NP-hard combinatorial search problems. This paper shows how quantum computers can also use these neighborhood relations. An example of such a local quantum search is evaluated empirically for the satisfiability (SAT) problem and shown to be particularly effective for highly constrained instances. For problems with an intermediate number of constraints, it is somewhat less effective at exploiting problem structure than incremental quantum methods, in spite of the much smaller search space used by the local method.
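The classical baseline behind such local quantum search is a random walk over neighbouring assignments. A sketch of that classical procedure (a Papadimitriou-style random walk, not the quantum algorithm itself):

```python
import random

def local_search_sat(clauses, n_vars, max_flips=20000, seed=1):
    """Random-walk local search for SAT: repeatedly pick an unsatisfied
    clause and flip one of its variables, i.e. move to a neighbouring state.
    Clauses are lists of signed 1-indexed integers (negative = negated)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                        # all clauses satisfied
        var = abs(rng.choice(rng.choice(unsat))) # variable from an unsat clause
        assign[var] = not assign[var]            # flip: step to a neighbour
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) -- satisfiable
formula = [[1, 2], [-1, 3], [-2, -3]]
model = local_search_sat(formula, n_vars=3)
```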

  4. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  5. Computational methods for vortex dominated compressible flows

    Science.gov (United States)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.
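The four-stage Runge-Kutta scheme mentioned is the classic RK4 time integrator. A sketch on a scalar test ODE rather than the Euler equations themselves:

```python
import math

# One classic four-stage Runge-Kutta (RK4) step for y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate the test problem y' = -y from t = 0 to t = 1 with h = 0.01;
# the exact solution is y(1) = exp(-1).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```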

  6. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, James H. [University of North Florida; Cox, Philip [University of North Florida; Harrington, William J [University of North Florida; Campbell, Joseph L [University of North Florida

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  7. Accelerated Matrix Element Method with Parallel Computing

    CERN Document Server

    Schouten, Doug; Stelzer, Bernd

    2014-01-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbour, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.
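The parallel speedup the abstract attributes to multidimensional integration can be illustrated with a minimal sketch, using CPU worker processes standing in for GPU threads. The Gaussian integrand and all names here are illustrative assumptions, not taken from the cited work:

```python
import math
import random
from multiprocessing import Pool

def partial_integral(args):
    """One worker's Monte Carlo estimate of the integral over [0, 1)^dim."""
    seed, n_samples, dim = args
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Toy integrand standing in for a squared matrix element over
        # phase-space coordinates: a narrow Gaussian peak in dim dimensions.
        x = [rng.random() for _ in range(dim)]
        total += math.exp(-sum((xi - 0.5) ** 2 for xi in x) / 0.02)
    return total / n_samples

def parallel_mc_integral(dim=4, n_workers=4, samples_per_worker=50_000):
    """Average independent sub-estimates computed in parallel worker processes."""
    tasks = [(seed, samples_per_worker, dim) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        parts = pool.map(partial_integral, tasks)
    return sum(parts) / len(parts)

if __name__ == "__main__":
    print(parallel_mc_integral())
```

Because each sub-estimate is statistically independent, the speedup is essentially linear in the number of workers; a GPU implementation assigns one phase-space point per thread rather than one seed per process.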

  8. Advanced Aqueous Phase Catalyst Development using Combinatorial Methods Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Combinatorial methods are proposed to develop advanced Aqueous Oxidation Catalysts (AOCs) with the capability to mineralize organic contaminants present in...

  9. Identification of Enhancers In Human: Advances In Computational Studies

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2016-03-24

Roughly 50% of the human genome consists of noncoding sequences serving as regulatory elements responsible for the diverse gene expression of the cells in the body. One very well studied category of regulatory elements is that of enhancers. Enhancers increase the transcriptional output in cells through chromatin remodeling or recruitment of complexes of binding proteins. Identification of enhancers using computational techniques is an interesting area of research, and up to now several approaches have been proposed. However, the current state-of-the-art methods face limitations: although the function of enhancers is established, their mechanism of action is not well understood. This PhD thesis presents a bioinformatics/computer science study that focuses on the problem of identifying enhancers in different human cells using computational techniques. The dissertation is decomposed into four main tasks that we present in different chapters. First, since many of the enhancers' functions are not well understood, we study the basic biological models by which enhancers trigger transcriptional functions and we comprehensively survey over 30 bioinformatics approaches for identifying enhancers. Next, we elaborate on the availability of enhancer data as produced by different enhancer identification methods and experimental procedures. In particular, we analyze the advantages and disadvantages of existing solutions and report obstacles that require further consideration. To mitigate these problems we developed the Database of Integrated Human Enhancers (DENdb), a centralized online repository that archives enhancer data from 16 ENCODE cell lines. The integrated enhancer data are also combined with many other experimental data that can be used to interpret the enhancer content and generate a novel enhancer annotation that complements the existing integrative annotation proposed by the ENCODE consortium. Next, we propose the first deep-learning computational

  10. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

A reliable delamination prediction scheme is indispensable for preventing potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. This paper presents an approach for online health monitoring of in-service composite laminates that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the size and location of delaminations from changes in measured natural frequencies. The results clearly highlight the efficiency and robustness of the approach.

  11. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    Science.gov (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), whose motto this year was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis - Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program

  12. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define....... To illustrate these concepts a number of examples are used. These include models of polymer membranes, distillation and catalyst behaviour. Some detailed considerations within these models are stated and discussed. Model generation concepts are introduced and ideas of a reference model are given that shows...... a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer aided modelling tool, which incorporates an interface to MS Excel....

  13. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows that these difficulties can be overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow analytical formulations to be derived with no human intervention. In particular, it is to be noted that, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
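The record's subject, Jacobian computation and its singularities, can be illustrated with a minimal planar two-link example: an analytic Jacobian checked against finite differences, and the well-known singularity at full arm extension. The link lengths and function names are illustrative assumptions, not the paper's velocity-representation algorithm:

```python
import math

L1, L2 = 1.0, 0.7  # link lengths (illustrative values)

def forward(q1, q2):
    """End-effector position of a planar two-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """Analytic 2x2 Jacobian d(x, y)/d(q1, q2)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def jacobian_fd(q1, q2, h=1e-6):
    """Finite-difference check of the analytic Jacobian."""
    x0, y0 = forward(q1, q2)
    cols = []
    for dq1, dq2 in ((h, 0.0), (0.0, h)):
        x1, y1 = forward(q1 + dq1, q2 + dq2)
        cols.append(((x1 - x0) / h, (y1 - y0) / h))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]
```

Here det J = L1 L2 sin(q2), so the determinant vanishes at q2 = 0 (arm fully extended), which is exactly the kind of singular configuration the abstract mentions.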

  14. Why Video? How Technology Advances Method

    Science.gov (United States)

    Downing, Martin J., Jr.

    2008-01-01

    This paper reports on the use of video to enhance qualitative research. Advances in technology have improved our ability to capture lived experiences through visual means. I reflect on my previous work with individuals living with HIV/AIDS, the results of which are described in another paper, to evaluate the effectiveness of video as a medium that…

  15. Computational Methods for Physicists Compendium for Students

    CERN Document Server

    Sirca, Simon

    2012-01-01

This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways of optimizing program execution speed. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less emphasis is placed on the explanation of individual algorithms. The aim is to foster independent thinking and a healthy degree of scepticism and scrutiny in the reader, rather than blind reliance on readily available commercial tools.

  16. Advanced Burnup Method using Inductively Coupled Plasma Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hilton, Bruce A. [Idaho Natonal Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States); Glagolenko, Irina; Giglio, Jeffrey J.; Cummings, Daniel G

    2009-06-15

Nuclear fuel burnup is a key parameter used to assess irradiated fuel performance, to characterize the dependence of property changes on irradiation, and to perform nuclear materials accountability. For advanced transmutation fuels and high-burnup LWR fuels that have multiple fission sources, the existing Nd-148 ASTM burnup determination practice requires input of calculated fission fractions (identifying the specific fission source isotope and neutron energy that yielded fission, e.g., U-235 from thermal neutrons, U-238 from fast neutrons) from computational neutronics analysis in addition to the measured concentration of a single fission product isotope. We report a novel methodology for nuclear fuel burnup determination that is completely independent of model predictions and reactor type. The proposed method leverages the capability of Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to quantify multiple fission products and actinides, and uses these data to develop a system of burnup equations whose solution is the fission fractions. The fission fractions are substituted back into the equations to determine burnup. This technique requires high-fidelity fission yield data, which are not uniformly available for all fission products. We discuss different means that can potentially assist in the indirect determination, verification and refinement of the ambiguously known fission yields. A variety of irradiated fuel samples were characterized by ICP-MS and the results used to test the advanced burnup method. The samples include metallic alloy fuel irradiated in a fast-spectrum reactor (EBR-II), metallic alloy in a tailored spectrum, and dispersion fuel in the thermal spectrum of the Advanced Test Reactor (ATR). The derived fission fractions and measured burnups are compared with calculated values predicted by neutronics models. (authors)
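The system of burnup equations described in the abstract can be sketched for a toy case with two fission sources and two measured fission products: each measured atom count is a yield-weighted sum of the fissions from each source, giving a linear system for the fissions (and hence the fission fractions). The yield numbers below are illustrative approximations, not evaluated nuclear data:

```python
# Illustrative yields (atoms per fission) for two fission products from two
# fission sources; a real analysis would use evaluated fission yield data.
yields = {
    "Nd-148": {"U235_thermal": 0.0167, "Pu239_thermal": 0.0164},
    "Cs-133": {"U235_thermal": 0.0670, "Pu239_thermal": 0.0700},
}

def solve_fissions(measured):
    """Solve the 2x2 system  N_i = sum_j Y_ij * F_j  for the fissions F_j."""
    a, b = yields["Nd-148"]["U235_thermal"], yields["Nd-148"]["Pu239_thermal"]
    c, d = yields["Cs-133"]["U235_thermal"], yields["Cs-133"]["Pu239_thermal"]
    det = a * d - b * c
    n1, n2 = measured["Nd-148"], measured["Cs-133"]
    f_u = (n1 * d - b * n2) / det
    f_pu = (a * n2 - n1 * c) / det
    return f_u, f_pu

def fission_fractions(measured):
    """Fraction of total fissions attributable to each source."""
    f_u, f_pu = solve_fissions(measured)
    total = f_u + f_pu
    return {"U235_thermal": f_u / total, "Pu239_thermal": f_pu / total}

def burnup_percent(f_u, f_pu, initial_heavy_atoms):
    """Burnup in %FIMA: fissions per initial heavy-metal atom."""
    return 100.0 * (f_u + f_pu) / initial_heavy_atoms
```

With more measured products than sources, the same system is solved in the least-squares sense, which is where the ICP-MS multi-isotope capability pays off.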

  17. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, James H. [University of North Florida; Cox, Philip [University of North Florida; Harrington, William J [University of North Florida; Campbell, Joseph L [University of North Florida

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  18. Quantitative Computed Tomography and image analysis for advanced muscle assessment

    Directory of Open Access Journals (Sweden)

    Kyle Joseph Edmunds

    2016-06-01

Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types with specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration.
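The association of tissue types with Hounsfield Unit windows, and the reporting of compositions relative to total muscle volume, can be sketched as follows. The cut-off values here are illustrative assumptions, not the thresholds used in the cited studies:

```python
# Illustrative HU windows for tissue classification in muscle CT;
# the exact cut-offs are assumptions, not values from the review.
HU_WINDOWS = [
    ("fat", -200.0, -10.0),
    ("loose connective / atrophic muscle", -9.0, 40.0),
    ("normal muscle", 41.0, 150.0),
]

def classify_voxel(hu):
    """Map one voxel's HU value to a tissue class (or 'other')."""
    for label, lo, hi in HU_WINDOWS:
        if lo <= hu <= hi:
            return label
    return "other"

def composition(hu_values):
    """Fraction of voxels per tissue class, relative to the total volume."""
    counts = {}
    for hu in hu_values:
        label = classify_voxel(hu)
        counts[label] = counts.get(label, 0) + 1
    n = len(hu_values)
    return {label: c / n for label, c in counts.items()}
```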

  19. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  20. Review of Computational Stirling Analysis Methods

    Science.gov (United States)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

Nuclear thermal-to-electric power conversion carries the promise of longer-duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  1. Condition monitoring through advanced sensor and computational technology : final report (January 2002 to May 2005).

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung-Taek (Korea Atomic Energy Research Institute, Daejon, Korea); Luk, Vincent K.

    2005-05-01

The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from the Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.

  2. COMSAC: Computational Methods for Stability and Control. Part 2

    Science.gov (United States)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  3. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
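One of the two ECMs named above, Simulated Annealing, can be sketched for a toy retrieval: fit the depth and center of a single absorption line by minimizing a sum-of-squares fitness between observed and synthetic spectra. The forward model and all parameter values are illustrative assumptions, not the NASA implementation:

```python
import math
import random

def synthetic_spectrum(depth, center, width=0.05, n=50):
    """Toy forward model: unit continuum with one Gaussian absorption line."""
    xs = [i / (n - 1) for i in range(n)]
    return [1.0 - depth * math.exp(-((x - center) / width) ** 2) for x in xs]

def fitness(params, observed):
    """Sum of squared residuals between synthetic and observed spectra."""
    model = synthetic_spectrum(*params)
    return sum((m - o) ** 2 for m, o in zip(model, observed))

def anneal(observed, steps=4000, t0=0.5, seed=0):
    """Simulated annealing: accept worse moves with Boltzmann probability
    under a linearly cooling temperature, tracking the best state seen."""
    rng = random.Random(seed)
    params = [rng.random(), rng.random()]          # initial depth, line center
    f_cur = fitness(params, observed)
    best, best_f = list(params), f_cur
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9          # cooling schedule
        cand = [p + rng.gauss(0.0, 0.05) for p in params]
        f_cand = fitness(cand, observed)
        if f_cand < f_cur or rng.random() < math.exp((f_cur - f_cand) / t):
            params, f_cur = cand, f_cand
            if f_cur < best_f:
                best, best_f = list(params), f_cur
    return best, best_f
```

In the methodology described above, the same fitness-minimization loop would compare multi-band, multi-angle synthetic data against observations, and a Genetic Algorithm could replace the annealer without changing the surrounding framework.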

  4. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools for obtaining reliable and validated results and logically correct decisions in a variety of geometric computations, together with the means for alleviating the effects of errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, a powerful new algorithm which improves many geometric computations and makes th
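Interval arithmetic of the kind the book describes can be sketched with a minimal class, plus a geometric orientation test that reports "uncertain" whenever input uncertainty makes the sign of the determinant unreliable. This is a sketch under simplifying assumptions; a validated implementation would also use directed rounding at every operation:

```python
class Interval:
    """Closed interval [lo, hi]; arithmetic returns enclosing intervals."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def orientation(ax, ay, bx, by, cx, cy):
    """Which side of line AB does point C lie on? All arguments are Intervals;
    if the determinant's interval straddles zero, no safe decision exists."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if det.lo > 0:
        return "left"
    if det.hi < 0:
        return "right"
    return "uncertain"
```

The "uncertain" outcome is exactly the case where a naive floating-point test silently makes a logically incorrect decision.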

  5. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.

  6. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    Science.gov (United States)

    Wigton, Larry

    1996-01-01

    Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models is written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  7. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Science.gov (United States)

    2013-11-13

    ... AGENCY Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and..., ``Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and... period was published on September 30, 2013. At the request of the American Chemistry Council, the...

  8. Advanced Methods of Biomedical Signal Processing

    CERN Document Server

    Cerutti, Sergio

    2011-01-01

    This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult

  9. Computational Efforts in Support of Advanced Coal Research

    Energy Technology Data Exchange (ETDEWEB)

    Suljo Linic

    2006-08-17

The focus of this project was to employ first-principles computational methods to study the underlying molecular elementary processes that govern hydrogen diffusion through Pd membranes, as well as the elementary processes that govern the CO- and S-poisoning of these membranes. Our computational methodology integrated a multiscale hierarchical modeling approach, wherein a molecular understanding of the interactions between various species is gained from ab-initio quantum chemical Density Functional Theory (DFT) calculations, while a mesoscopic statistical mechanical model such as kinetic Monte Carlo is employed to predict key macroscopic membrane properties such as permeability. The key developments are: (1) We have systematically coupled the ab initio calculations with kinetic Monte Carlo (KMC) simulations to model hydrogen diffusion through Pd-based membranes. The predicted tracer diffusivity of hydrogen atoms through the bulk of the Pd lattice from KMC simulations is in excellent agreement with experiments. (2) The KMC simulations of dissociative adsorption of H₂ over the Pd(111) surface indicate that for thin membranes (less than 10 µm thick), the diffusion of hydrogen from the surface to the first subsurface layer is rate limiting. (3) Sulfur poisons the Pd surface by altering the electronic structure of the Pd atoms in the vicinity of the S atom. The KMC simulations indicate that increasing sulfur coverage drastically reduces the hydrogen coverage on the Pd surface and hence the driving force for diffusion through the membrane.
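The full DFT/KMC coupling is beyond a short example, but the KMC half can be sketched as a 1D lattice hop model whose estimated tracer diffusivity is checked against the exact value D = Γa² (Γ the per-direction hop rate, a the lattice spacing). The rates and lattice constant below are illustrative, not the DFT-derived Pd values from the project:

```python
import math
import random

def kmc_tracer_diffusivity(hop_rate=1.0, a=1.0, n_hops=400, n_walkers=500, seed=1):
    """Kinetic Monte Carlo for one atom hopping on a 1D lattice with rate
    `hop_rate` per direction; returns the estimated tracer diffusivity."""
    rng = random.Random(seed)
    sum_x2, sum_t = 0.0, 0.0
    for _ in range(n_walkers):
        x, t = 0.0, 0.0
        for _ in range(n_hops):
            # exponential waiting time; total escape rate is 2 * hop_rate
            t += -math.log(1.0 - rng.random()) / (2.0 * hop_rate)
            x += a if rng.random() < 0.5 else -a   # choose left or right hop
        sum_x2 += x * x
        sum_t += t
    # Einstein relation in one dimension: <x^2> = 2 D t
    return (sum_x2 / n_walkers) / (2.0 * sum_t / n_walkers)
```

In the project's setting, the hop rates are not free parameters but come from the DFT barriers, and the lattice and rate catalogue are 3D; the waiting-time/event-selection structure is the same.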

  10. Advances in x-ray computed microtomography at the NSLS

    International Nuclear Information System (INIS)

The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines, from industrial materials processing to environmental science. The most recent applications are presented here, as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented that is based on a refinement of the gridding algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 s for a 929 × 929 pixel² slice on an R10,000 CPU, a more than 8× reduction compared with the Filtered Back-Projection method

  11. Block sparse Cholesky algorithms on advanced uniprocessor computers

    Energy Technology Data Exchange (ETDEWEB)

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
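The left-looking organization discussed above can be sketched in dense form: column j is first updated by all previously factored columns to its left, then scaled by the square root of its diagonal. This is a dense toy, not the report's sparse or blocked codes, which restrict the updates to the columns (or supernodes) that actually contribute:

```python
def cholesky_left_looking(A):
    """Dense left-looking Cholesky factorization A = L L^T for a symmetric
    positive definite matrix given as a list of row lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # apply updates from all previously computed columns k < j
        col = [A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
               for i in range(j, n)]
        d = col[0] ** 0.5          # diagonal entry; > 0 for an SPD matrix
        L[j][j] = d
        for i in range(j + 1, n):
            L[i][j] = col[i - j] / d
    return L
```

Blocking replaces the scalar inner products here with dense matrix-matrix updates on panels of columns, which is what lets the factorization run near peak on cache-based uniprocessors.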

  12. Catalytic Methods in Asymmetric Synthesis Advanced Materials, Techniques, and Applications

    CERN Document Server

    Gruttadauria, Michelangelo

    2011-01-01

    This book covers advances in the methods of catalytic asymmetric synthesis and their applications. Coverage moves from new materials and technologies to homogeneous metal-free catalysts and homogeneous metal catalysts. The applications of several methodologies for the synthesis of biologically active molecules are discussed. Part I addresses recent advances in new materials and technologies such as supported catalysts, supports, self-supported catalysts, chiral ionic liquids, supercritical fluids, flow reactors and microwaves related to asymmetric catalysis. Part II covers advances and milesto

  13. Modules and methods for all photonic computing

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
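A conventional digital analogue of the recirculating optical loop is plain fixed-point iteration on a discretized PDE. The sketch below runs Jacobi sweeps for Laplace's equation on a square grid; it illustrates only the iterative principle the claim describes, not the patented optical implementation.

```python
import numpy as np

def jacobi_laplace(boundary, n_iter=500):
    """Jacobi iteration for Laplace's equation: each sweep replaces every
    interior value with the average of its four neighbours, with the
    boundary held fixed -- the kind of fixed-point recirculation the
    optical loop performs in hardware."""
    u = boundary.copy().astype(float)
    interior = np.zeros_like(u, dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
    return u
```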

  14. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    Science.gov (United States)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  15. Advanced mathematical methods in science and engineering

    CERN Document Server

    Hayek, SI

    2010-01-01

    Ordinary Differential Equations: definitions; linear differential equations of first order; linear independence and the Wronskian; linear homogeneous differential equations of order N with constant coefficients; Euler's equation; particular solutions by the method of undetermined coefficients; particular solutions by the method of variation of parameters; Abel's formula for the Wronskian; initial value problems. Series Solutions of Ordinary Differential Equations: introduction; power series solutions; classification

  16. A Computational Method for Sharp Interface Advection

    CERN Document Server

    Roenby, Johan; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists in two parts: First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple 2D and 3D interface advection problems ...
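The face-flux idea, computing the volume of one fluid carried across a cell face during a time step, can be caricatured in one dimension. The sketch below is a hypothetical toy with a fluid-packed-against-the-downwind-face reconstruction, not the isoAdvector isosurface geometry; it only illustrates how geometric face fluxes keep the advection conservative and sharp.

```python
def advect_vof_1d(alpha, u, dt, dx, steps=1):
    """Toy 1-D geometric VOF advection with uniform velocity u > 0 and
    Courant number u*dt/dx <= 1. The fluid in each cell is assumed packed
    against the downwind (right) face, so the volume crossing face i+1 in
    one step is the overlap of that fluid column with the slab of width
    u*dt upstream of the face."""
    c = u * dt / dx                 # Courant number, assumed <= 1
    a = list(alpha)
    n = len(a)
    for _ in range(steps):
        flux = [0.0] * (n + 1)      # fluid volume fraction crossing each face
        for i in range(n):
            flux[i + 1] = min(a[i], c)   # packed-right reconstruction
        a = [a[i] - flux[i + 1] + flux[i] for i in range(n)]
    return a
```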

  17. Experimental and computing strategies in advanced material characterization problems

    Science.gov (United States)

    Bolzon, G.

    2015-10-01

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data while reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  18. Experimental and computing strategies in advanced material characterization problems

    Energy Technology Data Exchange (ETDEWEB)

    Bolzon, G. [Department of Civil and Environmental Engineering, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milano, Italy gabriella.bolzon@polimi.it (Italy)

    2015-10-28

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that make it possible to acquire a large amount of data while reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication intends to summarize some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  19. Advanced spectral methods for climatic time series

    Science.gov (United States)

    Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.

    2002-01-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
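The simplest spectral tool the review builds on is the plain periodogram. The hypothetical helper below estimates the dominant period of a series from its strongest spectral peak; a real analysis of a series like the SOI would use the more robust SSA/MTM machinery the review surveys.

```python
import numpy as np

def dominant_period(x, dt=1.0):
    """Return the period of the strongest spectral peak of a series,
    estimated from a plain periodogram (mean removed, no taper)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = np.argmax(power[1:]) + 1   # skip the zero-frequency bin
    return 1.0 / freqs[k]
```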

  20. Computational Evaluation of the Traceback Method

    Science.gov (United States)

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  1. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    Science.gov (United States)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  2. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process the so-called "biological big data", which is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing
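The per-sequence, embarrassingly parallel pattern that many genomic pipelines exploit can be sketched with the standard library alone. `gc_content` and `gc_parallel` below are hypothetical helpers, not code from any surveyed tool; a thread pool stands in for the cluster or cloud back ends the review discusses.

```python
from concurrent.futures import ThreadPoolExecutor

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_parallel(sequences, workers=4):
    """Run an independent per-sequence analysis across a worker pool;
    results come back in input order."""
    with ThreadPoolExecutor(workers) as ex:
        return list(ex.map(gc_content, sequences))
```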

  3. Recent advances in coupled-cluster methods

    CERN Document Server

    Bartlett, Rodney J

    1997-01-01

    Today, coupled-cluster (CC) theory has emerged as the most accurate, widely applicable approach for the correlation problem in molecules. Furthermore, the correct scaling of the energy and wavefunction with size (i.e. extensivity) recommends it for studies of polymers and crystals as well as molecules. CC methods have also paid dividends for nuclei, and for certain strongly correlated systems of interest in field theory. In order for CC methods to have achieved this distinction, it has been necessary to formulate new, theoretical approaches for the treatment of a variety of essential quantities

  4. PPIRank - an advanced method for ranking protein-protein interactions in TAP/MS data

    OpenAIRE

    Sun, Xiaoyun; Hong, Pengyu; Kulkarni, Meghana; Kwon, Young; Perrimon, Norbert

    2013-01-01

    Background: Tandem affinity purification coupled with mass-spectrometry (TAP/MS) analysis is a popular method for the identification of novel endogenous protein-protein interactions (PPIs) at large scale. Computational analysis of TAP/MS data is a critical step, particularly for high-throughput datasets, yet it remains challenging due to the noisy nature of TAP/MS data. Results: We investigated several major TAP/MS data analysis methods for identifying PPIs, and developed an advanced method, ...

  5. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
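The unwrapping step can be sketched as a Cartesian-to-polar resampling of each slice, so that a cylindrical object becomes a rectangular sheet. The function below is a minimal nearest-neighbour illustration of that idea, not NASA's software, which adds surface edge detection and full automation.

```python
import numpy as np

def unwrap_cylinder_slice(img, r_samples=32, theta_samples=128):
    """Resample a 2-D CT slice from Cartesian (x, y) onto a polar
    (radius, angle) grid with nearest-neighbour lookup; stacking the
    results over z yields the 'unwrapped' vertical sheets."""
    ny, nx = img.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    rs = np.linspace(0.0, min(cx, cy), r_samples)
    thetas = np.linspace(0.0, 2 * np.pi, theta_samples, endpoint=False)
    rr, tt = np.meshgrid(rs, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, nx - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, ny - 1)
    return img[ys, xs]   # shape (r_samples, theta_samples)
```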

  6. Advanced Aqueous Phase Catalyst Development using Combinatorial Methods Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The use of combinatorial methods is proposed to rapidly screen catalyst formulations for the advanced development of aqueous phase oxidation catalysts with greater...

  7. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...

  8. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...

  9. Advanced methods of treatment of hypophysis adenoma

    Directory of Open Access Journals (Sweden)

    Kan Ya.A.

    2011-03-01

    Full Text Available Hypophysis adenomas occur mostly in the chiasmatic-sellar region. They account for 18% of all new brain formations. Among pituitary adenomas, a large share are prolactinomas, which are manifested by the syndrome of hyperprolactinemia, and hormone-inactive hypophysis tumours (35%). Somatotropins (13-15%) are less frequent; their main clinical feature is acromegaly. Corticotropins (8-10%), gonadotropins (7-9%), thyrotropins (1%) and their mixed forms are rarely revealed. Transsphenoidal surgical intervention is considered the treatment of choice for hypophysis adenomas and other formations of the chiasmatic-sellar region. Alternative methods of treatment are conservative; they can serve as an addition to microsurgery (radiotherapy

  10. Advanced diagnostic methods for human brucellosis

    OpenAIRE

    Taleski, Vaso; Kunguloski, Dzoko

    2011-01-01

    Brucellosis is a typical zoonotic disease caused by organisms of the genus Brucella. Humans become infected by ingestion of animal food products, direct contact with infected animals, or inhalation of infectious aerosols. Variable symptoms and sub-clinical and atypical infections make diagnosis of human brucellosis difficult. The objective of this paper is to evaluate the specificity and sensitivity of different diagnostic methods, on a large number of samples, in patients at different stages of...

  11. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    Science.gov (United States)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  12. Computational Methods for Multi-dimensional Neutron Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    Song Han

    2009-10-15

    The lead-cooled fast reactor (LFR) has the potential to become one of the advanced reactor types of the future. Innovative computational tools for system design and safety analysis of such NPP systems are needed. One of the most popular trends is coupling multi-dimensional neutron kinetics (NK) with thermal-hydraulics (T-H) to enhance the capability of simulating NPP systems under abnormal conditions or during rare severe accidents. Therefore, various numerical methods applied in the NK module should be reevaluated to adapt to the scheme of a coupled code system. In the author's present work a neutronic module for the solution of two-dimensional steady-state multigroup diffusion problems in nuclear reactor cores is developed. The module can produce both direct fluxes as well as adjoints, i.e. neutron importances. Different numerical schemes are employed. A standard finite-difference (FD) approach is firstly implemented, mainly to serve as a reference for less computationally challenging schemes, such as transverse-integrated nodal methods (TINM) and boundary element methods (BEM), which are considered in the second part of the work. The validation of the methods proposed is carried out by comparisons of the results for some reference structures. In particular, a critical problem for a homogeneous reactor, for which an analytical solution exists, is considered as a benchmark. The computational module is then applied to a fast spectrum system having physical characteristics similar to the proposed European Lead-cooled System (ELSY) project. The results show the effectiveness of the numerical techniques presented. The flexibility and the possibility to obtain neutron importances allow the use of the module for parametric studies, design assessments and integral parameter evaluations, as well as for future sensitivity and perturbation analyses and as a shape solver for time-dependent procedures.
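The reference finite-difference scheme reduces, in its simplest one-group, one-dimensional form, to a tridiagonal linear solve. The sketch below is illustrative only, not the module described in the record: it discretizes -D u'' + sigma_a u = S with the standard three-point stencil and zero-flux boundaries.

```python
import numpy as np

def diffusion_1d(D, sigma_a, source, dx):
    """Solve the one-group, 1-D neutron diffusion equation
    -D u'' + sigma_a u = S with zero-flux (u = 0) boundaries, using the
    standard three-point finite-difference stencil and a direct solve."""
    n = len(source)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * D / dx**2 + sigma_a      # diagonal of the stencil
        if i > 0:
            A[i, i - 1] = -D / dx**2             # coupling to left node
        if i < n - 1:
            A[i, i + 1] = -D / dx**2             # coupling to right node
    return np.linalg.solve(A, np.asarray(source, dtype=float))
```

Adjoint fluxes (neutron importances) follow from the same machinery by solving with the transposed operator, which for this symmetric discretization coincides with the direct one.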

  13. The Advance of Computing from the Ground to the Cloud

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  14. Advances in organometallic synthesis with mechanochemical methods.

    Science.gov (United States)

    Rightmire, Nicholas R; Hanusa, Timothy P

    2016-02-14

    Solvent-based syntheses have long been normative in all areas of chemistry, although mechanochemical methods (specifically grinding and milling) have been used to good effect for decades in organic, and to a lesser but growing extent, inorganic coordination chemistry. Organometallic synthesis, in contrast, represents a relatively underdeveloped area for mechanochemical research, and the potential benefits are considerable. From access to new classes of unsolvated complexes, to control over stoichiometries that have not been observed in solution routes, mechanochemical (or 'M-chem') approaches have much to offer the synthetic chemist. It has already become clear that removing the solvent from an organometallic reaction can change reaction pathways considerably, so that prediction of the outcome is not always straightforward. This Perspective reviews recent developments in the field, and describes equipment that can be used in organometallic synthesis. Synthetic chemists are encouraged to add mechanochemical methods to their repertoire in the search for new and highly reactive metal complexes and novel types of organometallic transformations. PMID:26763151

  15. Advancements in Research Synthesis Methods: From a Methodologically Inclusive Perspective

    Science.gov (United States)

    Suri, Harsh; Clarke, David

    2009-01-01

    The dominant literature on research synthesis methods has positivist and neo-positivist origins. In recent years, the landscape of research synthesis methods has changed rapidly to become inclusive. This article highlights methodologically inclusive advancements in research synthesis methods. Attention is drawn to insights from interpretive,…

  16. Tutorial on Computing: Technological Advances, Social Implications, Ethical and Legal Issues

    OpenAIRE

    Debnath, Narayan

    2012-01-01

    Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high performance communication networks are central to contemporary information technology. The computing applications in a wide range of areas including business, communications, medical research, transportation, entertainments, and education are transforming lo...

  17. Relaxed resource advance reservation policy in grid computing

    Institute of Scientific and Technical Information of China (English)

    XIAO Peng; HU Zhi-gang

    2009-01-01

    The advance reservation technique has been widely applied in many grid systems to provide end-to-end quality of service (QoS). However, it will result in low resource utilization rate and high rejection rate when the reservation rate is high. To mitigate these negative effects brought about by advance reservation, a relaxed advance reservation policy is proposed, which allows accepting new reservation requests that overlap the existing reservations under certain conditions. Both the benefits and the risks of the proposed policy are presented theoretically. The experimental results show that the policy can achieve a higher resource utilization rate and lower rejection rate compared to the conventional reservation policy and backfilling technique. In addition, the policy shows better adaptation when the grid systems are in the presence of a high reservation rate.
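The acceptance test at the heart of such a relaxed policy can be sketched as a capacity check over the contested interval. The function below is a hypothetical simplification (a single resource with piecewise-constant aggregate load), not the paper's exact policy: it admits an overlapping request as long as total reserved demand never exceeds capacity.

```python
def can_accept(existing, request, capacity):
    """Relaxed advance-reservation check: accept a new (start, end, demand)
    request iff the aggregate reserved demand stays within capacity at
    every instant of the request's interval. The load only changes at
    reservation start points, so checking those suffices."""
    start, end, demand = request
    points = {start} | {s for s, _, _ in existing if start <= s < end}
    for t in points:
        load = demand + sum(d for s, e, d in existing if s <= t < e)
        if load > capacity:
            return False
    return True
```

A conventional (strict) policy would instead reject any request whose interval overlaps an existing reservation, which is what drives up the rejection rate the record describes.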

  18. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Goodarz Ahmadi

    2004-10-01

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and particle image velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. Concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow condition in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.
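The Lagrangian side of such a formulation tracks each bubble or particle through the resolved liquid field. The one-dimensional sketch below uses a linear drag relaxation toward the local liquid velocity; it is an illustration under assumed simplifications, not the project's collision-resolving model.

```python
def bubble_trajectory(u_liquid, v0, tau, dt, steps):
    """Lagrangian tracking of a single bubble in a known 1-D liquid
    velocity field: dv/dt = (u_liquid - v) / tau, a linear drag
    relaxation with response time tau, integrated by forward Euler."""
    v, x = v0, 0.0
    path = []
    for _ in range(steps):
        v += dt * (u_liquid - v) / tau   # drag relaxation toward liquid
        x += dt * v                      # advance bubble position
        path.append((x, v))
    return path
```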

  19. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    International Nuclear Information System (INIS)

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of the Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and particle image velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. Concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow condition in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.

  20. Application of the Advanced Distillation Curve Method to Fuels for Advanced Combustion Engine Gasolines

    KAUST Repository

    Burger, Jessica L.

    2015-07-16

    © This article not subject to U.S. Copyright. Published 2015 by the American Chemical Society. Incremental but fundamental changes are currently being made to fuel composition and combustion strategies to diversify energy feedstocks, decrease pollution, and increase engine efficiency. The increase in parameter space (by having many variables in play simultaneously) makes it difficult at best to propose strategic changes to engine and fuel design by use of conventional build-and-test methodology. To make changes in the most time- and cost-effective manner, it is imperative that new computational tools and surrogate fuels are developed. Currently, sets of fuels are being characterized by industry groups, such as the Coordinating Research Council (CRC) and other entities, so that researchers in different laboratories have access to fuels with consistent properties. In this work, six gasolines (FACE A, C, F, G, I, and J) are characterized by the advanced distillation curve (ADC) method to determine the composition and enthalpy of combustion in various distillate volume fractions. Tracking the composition and enthalpy of distillate fractions provides valuable information for determining structure property relationships, and moreover, it provides the basis for the development of equations of state that can describe the thermodynamic properties of these complex mixtures and lead to development of surrogate fuels composed of major hydrocarbon classes found in target fuels.

  1. Approximation method to compute domain related integrals in structural studies

    Science.gov (United States)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations; e.g., in strength of materials the bending moment may be computed at discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies about the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with 'simple' shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, the according calculus being carried out using an algorithm. The 'basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of 'basic' shapes included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another 'basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
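The signed-shape Boolean algebra can be sketched directly: compose section properties from (sign, area, centroid) triples, with sign -1 for subtracted holes. The helper below is a hypothetical illustration of that composition for the area and centroid of a built-up cross-section.

```python
def section_properties(shapes):
    """Compose section area and centroid height from signed 'simple'
    shapes: each entry is (sign, area, y_centroid), with sign -1 for
    shapes to be subtracted (holes). Returns (net_area, y_centroid)."""
    area = sum(s * a for s, a, _ in shapes)
    first_moment = sum(s * a * y for s, a, y in shapes)   # about y = 0
    return area, first_moment / area

# Hollow rectangular section: 10 x 20 outer minus 6 x 16 inner opening,
# both centred at y = 10, composed as (+outer) + (-inner).
```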

  2. Method and ethics in advancing jury research.

    Science.gov (United States)

    Robertshaw, P

    1998-10-01

    In this article the contemporary problems of the jury and jury research are considered. This is timely, in view of the current Home Office Consultation Paper on the future of, and alternatives to, the jury in serious fraud trials, to which the author has submitted representations on its jury aspects. The research position is dominated by the prohibitions in the Contempt of Court Act 1981. The types of indirect research on jury deliberation which have been achieved within this stricture are outlined. In the USA, direct research of the jury is possible but, for historical reasons, it has been in television documentaries that direct observation of the deliberation process has been achieved. The first issue is discussed and the problems of inauthenticity, 'the observer effect', and of existential invalidity in 'mock' or 'shadow' juries are noted. Finally, the kinds of issues that could be addressed if licensed jury deliberation research was legalized, are proposed. It is also suggested that there are methods available to transcend the problems associated with American direct research. PMID:9808945

  3. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer-assisted virtual environment, or CAVE™, that allows scientists and engineers to literally walk into their data...

  4. Building an Advanced Computing Environment with SAN Support

    Institute of Scientific and Technical Information of China (English)

    Dajian YANG; Mei MA; et al.

    2001-01-01

    The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all storage devices directly to the machines. This kind of storage strategy cannot properly meet the requirements of our BEPC II/BESⅢ project. We therefore designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, and convenient storage and data management.

  5. Advances in Physarum machines sensing and computing with Slime mould

    CERN Document Server

    2016-01-01

    This book is devoted to the slime mould Physarum polycephalum, a large single cell capable of distributed sensing, concurrent information processing, parallel computation and decentralized actuation. The ease of culturing and experimenting with Physarum makes this slime mould an ideal substrate for real-world implementations of unconventional sensing and computing devices. The book is a treatise of theoretical and experimental laboratory studies on the sensing and computing properties of slime mould, and on the development of mathematical and logical theories of Physarum behaviour. It shows how to make logical gates and circuits, and electronic devices (memristors, diodes, transistors, wires, chemical and tactile sensors), with the slime mould. The book demonstrates how to modify the properties of Physarum computing circuits with functional nano-particles and polymers, to interface the slime mould with field-programmable arrays, and to use Physarum as a controller of microbial fuel cells. A unique multi-agent model...

  6. Advances in computational design and analysis of airbreathing propulsion systems

    Science.gov (United States)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  7. Advanced Methods of Observing Surface Plasmon Polaritons and Magnons

    Science.gov (United States)

    Moghaddam, Abolghasem Mobaraki

    Available from UMI in association with The British Library. Requires signed TDF. The primary objectives of this thesis are the investigation of the theoretical and experimental aspects of the design and construction of advanced techniques for the excitation of surface plasmon-polaritons, surface magneto-plasmon-polaritons and surface magnons, together with on-line observation of these phenomena. To accomplish these goals, analytical studies of the characteristic behaviour of these phenomena have been undertaken. For excitation of surface plasmon- and surface magneto-plasmon-polaritons, the most robust and conventional configuration, namely Prism-Medium-Air, coupled to a novel angle-scan (prism-spinning) method, was employed. The system described here can automatically measure the reflectivity of a multilayer system over a range of angles that includes the resonance angle in an Attenuated Total Reflection (ATR) experiment. The computer procedure that controls the system is quite versatile, allowing any right-angle prism of different angle or refractive index to be utilised. It also provides probes to check optical alignment within the system. Moreover, it performs the angular scan many times and averages the results in order to reduce environmental and other possible sources of noise within the system. The mechanical side of the system is unique and could eventually be adopted as a marketable piece of equipment. It consists of a turntable for holding the prism-sample assembly and a drive motor in conjunction with a servo-potentiometer whose output not only operates the turntable but also sends a signal to a computer to measure its position accurately. The interface unit enables a computer to automatically control an angular-scan ATR experiment for measuring the resonance reflectivity spectrum of a multilayer system. The interface unit uses an H-bridge switch formed by four bipolar power transistors and two small-signal MOSFETs to convert

  8. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human...

  9. Advances in Computing and Information Technology : Proceedings of the Second International

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2012-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology, including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and strong argumentation, were selected and collected in the present proceedings, ...

  10. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  11. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book is of tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  12. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods

  13. Advances in a computer aided bilateral manipulator system

    International Nuclear Information System (INIS)

    This paper relates developments and experiments carried out at Saclay, in the framework of the ARA program, by the computer-aided teleoperation (CAT) group. The goal is to improve the efficiency and operational safety of remote operations using computers and sensors. These make it possible to substitute for the operator(s), in time sharing and/or in parallel, and to augment the amount and/or quality of sensory feedback. After describing the test facility in Saclay, the developments of the various participants are described. Results of this work will be commercially available with the MA23M and future MAE 200 at La Calhene (France, UK, Japan)

  14. Strategy to Promote Active Learning of an Advanced Research Method

    Science.gov (United States)

    McDermott, Hilary J.; Dovey, Terence M.

    2013-01-01

    Research methods courses aim to equip students with the knowledge and skills required for research yet seldom include practical aspects of assessment. This reflective practitioner report describes and evaluates an innovative approach to teaching and assessing advanced qualitative research methods to final-year psychology undergraduate students. An…

  15. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-07-31

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY DOE... at (301) 903-7486 or email at: Melea.Baker@science.doe.gov . You must make your request for an oral... Computing Web site ( www.sc.doe.gov/ascr ) for viewing. Issued at Washington, DC, on July 25, 2012....

  16. 77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee

    Science.gov (United States)

    2012-10-12

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY DOE.... Computational Science Graduate Fellowship (CSGF) Longitudinal Study. Update on Exascale. Update from DOE data... contact Melea Baker, (301) 903-7486 or by email at: Melea.Baker@science.doe.gov . You must make...

  17. Advanced Simulation and Computing Co-Design Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Ang, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoang, Thuc T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McPherson, Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Neely, Rob [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  18. Advanced Micro Optics Characterization Using Computer Generated Holograms

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, S.; Maxey, L.C.; Moreshead, W.; Nogues, J.L.

    1998-11-01

    This CRADA has enabled the validation of Computer Generated Holograms (CGH) testing for certain classes of micro optics. It has also identified certain issues that are significant when considering the use of CGHs in this application. Both contributions are advantageous in the pursuit of better manufacturing and testing technologies for these important optical components.

  19. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  20. Integrated Computer Aided Planning and Manufacture of Advanced Technology Jet Engines

    Directory of Open Access Journals (Sweden)

    B. K. Subhas

    1987-10-01

    This paper highlights an attempt at evolving a computer-aided manufacturing system on a personal computer. A case study of an advanced technology jet engine component is included to illustrate various outputs from the system. The proposed system could be an alternative solution to sophisticated and expensive CAD/CAM workstations.

  1. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    CERN Document Server

    Stoitsov, M; Nazarewicz, W; Bulgac, A; Hagen, G; Kortelainen, M; Pei, J C; Roche, K J; Schunck, N; Thompson, I; Vary, J P; Wild, S M

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.

  2. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks, the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization, we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  3. Wing analysis using a transonic potential flow computational method

    Science.gov (United States)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    The ability of the method to compute wing transonic performance was determined by comparing computed results with both experimental data and results computed by other theoretical procedures. Both pressure distributions and aerodynamic forces were evaluated. Comparisons indicated that the method is a significant improvement in transonic wing analysis capability. In particular, the computational method generally calculated the correct development of three-dimensional pressure distributions from subcritical to transonic conditions. Complicated, multiple shocked flows observed experimentally were reproduced computationally. The ability to identify the effects of design modifications was demonstrated both in terms of pressure distributions and shock drag characteristics.

  4. Advance in research on aerosol deposition simulation methods

    International Nuclear Information System (INIS)

    A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously improving computer performance allows us to study particle transport and deposition in ever more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, the trends in aerosol deposition models and lung models, and the methods for carrying out deposition simulations, are reviewed. (authors)

  5. Advances in bio-inspired computing for combinatorial optimization problems

    CERN Document Server

    Pintea, Camelia-Mihaela

    2013-01-01

    'Advances in Bio-inspired Combinatorial Optimization Problems' illustrates several recent bio-inspired efficient algorithms for solving NP-hard problems. Theoretical bio-inspired concepts and models, in particular for agents, ants and virtual robots, are described. Large-scale optimization problems, for example the Generalized Traveling Salesman Problem and the Railway Traveling Salesman Problem, are solved and their results are discussed. Some of the main concepts and models described in this book are: inner rule to guide ant search - a recent model in ant optimization, heterogeneous sensitive a

  6. Advanced Modulation Techniques for High-Performance Computing Optical Interconnects

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko;

    2013-01-01

    We experimentally assess the performance of a 64 × 64 optical switch fabric used for ns-speed optical cell switching in supercomputer optical interconnects. More specifically, we study four alternative modulation formats and detection schemes, namely, 10-Gb/s nonreturn-to-zero differential phase...... of the optical shared memory supercomputer interconnect system switch fabric. In particular, we investigate the resilience of the aforementioned advanced modulation formats to the nonlinearities of semiconductor optical amplifiers, used as ON/OFF gates in the supercomputer optical switch fabric under study...

  7. Advances in Computer Science and Information Engineering Volume 1

    CERN Document Server

    Lin, Sally

    2012-01-01

    CSIE2012 is an integrated conference concentrating on Computer Science and Information Engineering. In these proceedings, readers will find a wealth of knowledge about Computer Science and Information Engineering from researchers around the world. The main role of the proceedings is to serve as an exchange pillar for researchers working in the mentioned fields. In order to meet the high quality standards of Springer's AISC series, the organizing committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous referee experts. Secondly, periodic review meetings were held among the reviewers, about five times, to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  8. Recent advances in swarm intelligence and evolutionary computation

    CERN Document Server

    2015-01-01

    This timely review volume summarizes the state-of-the-art developments in nature-inspired algorithms and applications with the emphasis on swarm intelligence and bio-inspired computation. Topics include the analysis and overview of swarm intelligence and evolutionary computation, hybrid metaheuristic algorithms, bat algorithm, discrete cuckoo search, firefly algorithm, particle swarm optimization, and harmony search as well as convergent hybridization. Application case studies have focused on the dehydration of fruits and vegetables by the firefly algorithm and goal programming, feature selection by the binary flower pollination algorithm, job shop scheduling, single row facility layout optimization, training of feed-forward neural networks, damage and stiffness identification, synthesis of cross-ambiguity functions by the bat algorithm, web document clustering, truss analysis, water distribution networks, sustainable building designs and others. As a timely review, this book can serve as an ideal reference f...

  9. Advances in Computer Science and Information Engineering Volume 2

    CERN Document Server

    Lin, Sally

    2012-01-01

    CSIE2012 is an integrated conference concentrating on Computer Science and Information Engineering. In these proceedings, readers will find a wealth of knowledge about Computer Science and Information Engineering from researchers around the world. The main role of the proceedings is to serve as an exchange pillar for researchers working in the mentioned fields. In order to meet the high quality standards of Springer's AISC series, the organizing committee made the following efforts. Firstly, poor-quality papers were rejected after review by anonymous referee experts. Secondly, periodic review meetings were held among the reviewers, about five times, to exchange reviewing suggestions. Finally, the conference organizers held several preliminary sessions before the conference. Through the efforts of different people and departments, the conference will be successful and fruitful.

  10. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  11. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data" that now reaches the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities.
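    As a toy illustration of the parallelism techniques this review surveys (not an example taken from the article itself), a CPU-bound per-chunk statistic such as GC content can be distributed over worker processes with Python's standard multiprocessing module; the sequence chunks below are hypothetical stand-ins for pieces of a large FASTA file.

    ```python
    from multiprocessing import Pool

    def gc_content(seq: str) -> float:
        """Fraction of G/C bases in one sequence chunk (CPU-bound per-chunk work)."""
        return (seq.count("G") + seq.count("C")) / len(seq)

    if __name__ == "__main__":
        # Hypothetical sequence chunks, one per worker task
        chunks = ["ACGTACGT", "GGGGCCCC", "ATATATAT", "GCGCATAT"]
        with Pool(processes=4) as pool:
            # Scatter chunks across worker processes, gather results in order
            results = pool.map(gc_content, chunks)
    ```

    The same scatter/gather pattern is what cluster schedulers and scientific workflow systems apply at a much larger scale.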

  12. Atomistic Method Applied to Computational Modeling of Surface Alloys

    Science.gov (United States)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces, J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. 
The computational tool, which is based on

  13. A method of billing third generation computer users

    Science.gov (United States)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over the use of second-generation billing methods.
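    The general flavor of such utilization-based billing formulas can be sketched as a signed sum over chargeable components. The component names and rates below are hypothetical placeholders for illustration only, not the paper's derived values (which were tuned to the UNIVAC 1108's component costs).

    ```python
    # Hypothetical per-unit rates for each chargeable component (illustrative only)
    RATES = {
        "cpu_seconds": 0.05,          # central processor time
        "memory_kword_hours": 0.02,   # core memory occupancy
        "io_accesses": 0.0001,        # peripheral/channel activity
    }

    def job_charge(usage: dict, rates: dict = RATES) -> float:
        """Bill a job as the sum over components of rate_i * measured usage_i."""
        return sum(rates[c] * usage.get(c, 0.0) for c in rates)

    # A job that used 100 CPU-seconds and 10,000 I/O accesses
    bill = job_charge({"cpu_seconds": 100.0, "io_accesses": 10_000.0})
    ```

    Setting the rates from expected utilization and component cost, as the paper does, amounts to choosing each rate so that the components' revenues recover their share of the system cost.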

  14. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States); Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); McNutt, Todd R. [Department of Radiation Oncology and Molecular Radiation Science, School of Medicine, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Mutic, Sasa [Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri 63110 (United States)

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  15. Multigrid Methods in Lattice Field Computations

    CERN Document Server

    Brandt, A

    1992-01-01

    The multigrid methodology is reviewed. By integrating numerical processes at all scales of a problem, it seeks to perform various computational tasks at a cost that rises as slowly as possible as a function of $n$, the number of degrees of freedom in the problem. Current and potential benefits for lattice field computations are outlined. They include: $O(n)$ solution of Dirac equations; just $O(1)$ operations in updating the solution (upon any local change of data, including the gauge field); similar efficiency in gauge fixing and updating; $O(1)$ operations in updating the inverse matrix and in calculating the change in the logarithm of its determinant; $O(n)$ operations per producing each independent configuration in statistical simulations (eliminating CSD), and, more important, effectively just $O(1)$ operations per each independent measurement (eliminating the volume factor as well). These potential capabilities have been demonstrated on simple model problems. Extensions to real life are explored.
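    As a concrete, if much simplified, illustration of the O(n) solvers discussed above, here is a minimal recursive V-cycle for the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries. This is a textbook sketch, not code from the paper, and it omits the lattice-field machinery (gauge fields, Dirac operators) entirely.

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2/3):
        """Weighted-Jacobi relaxation for the discrete -u'' = f."""
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        u = smooth(u, f, h)                    # pre-smoothing damps high frequencies
        if len(u) > 3:
            r = residual(u, f, h)
            rc = np.zeros((len(u) + 1) // 2)   # coarse grid: every other point
            # Full-weighting restriction of the residual to the coarse grid
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            # Solve the coarse-grid error equation recursively
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
            # Prolong the error by linear interpolation and correct
            e = np.zeros_like(u)
            e[::2] = ec
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])
            u += e
        return smooth(u, f, h)                 # post-smoothing

    # Solve -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is u = sin(pi x)
    n = 64
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = np.zeros_like(x)
    for _ in range(10):
        u = v_cycle(u, f, h)
    ```

    The point of the construction is that each V-cycle costs O(n) work yet reduces the residual by a grid-independent factor, which is the source of the O(n) solution costs claimed in the abstract.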

  16. Advances in Computational Social Science and Social Simulation

    OpenAIRE

    Miguel Quesada, Francisco J.; Amblard, Frédéric; Juan A. Barceló; Madella, Marco; Aguirre, Cristián; Ahrweiler, Petra; Aldred, Rachel; Ali Abbas, Syed Muhammad; Lopez Rojas, Edgar Alonso; Alonso Betanzos, Amparo; Alvarez Galvez, Javier; Andrighetto, Giulia; Antunes, Luis; Araghi, Yashar; Asatani, Kimitaka

    2014-01-01

    This conference is the joint meeting of the 10th Artificial Economics Conference (AE), the 10th Conference of the European Social Simulation Association (ESSA), and the 1st Simulating the Past to Understand Human History (SPUHH) conference. It was organized by the Laboratory for Socio-Historical Dynamics Simulation (LSDS-UAB) of the Universitat Autònoma de Barcelona. Readers will find results of recent research on computational social science and social simulation economics, management, so...

  17. Advanced wellbore thermal simulator GEOTEMP2. Appendix. Computer program listing

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.F.

    1982-02-01

    This appendix gives the program listing of GEOTEMP2 with comments and discussion to make the program organization more understandable. This appendix is divided into an introduction and four main blocks of code: main program, program initiation, wellbore flow, and wellbore heat transfer. The purpose and use of each subprogram is discussed and the program listing is given. Flowcharts are included to clarify code organization where needed. GEOTEMP2 was written in FORTRAN IV. Efforts have been made to keep the programming as conventional as possible so that GEOTEMP2 will run without modification on most computers.

  18. Advanced and intelligent computations in diagnosis and control

    CERN Document Server

    2016-01-01

    This book is devoted to the demands of research and industrial centers for diagnostics, monitoring and decision making systems that result from the increasing complexity of automation and systems, the need to ensure the highest level of reliability and safety, and continuing research and the development of innovative approaches to fault diagnosis. The contributions combine domains of engineering knowledge for diagnosis, including detection, isolation, localization, identification, reconfiguration and fault-tolerant control. The book is divided into six parts:  (I) Fault Detection and Isolation; (II) Estimation and Identification; (III) Robust and Fault Tolerant Control; (IV) Industrial and Medical Diagnostics; (V) Artificial Intelligence; (VI) Expert and Computer Systems.

  19. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open-source R language and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
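    The divide-and-conquer and online-updating ideas are easiest to see for linear regression, where the sufficient statistics X'X and X'y can be accumulated chunk by chunk and the final solve matches the all-at-once fit exactly. A minimal sketch follows (illustrative only; the chunk count and simulated data are arbitrary, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(size=n)

# Stream the data in 20 chunks, accumulating the sufficient statistics
# X'X and X'y; only O(p^2) memory is held at any time.
XtX, Xty = np.zeros((p, p)), np.zeros(p)
for Xc, yc in zip(np.array_split(X, 20), np.array_split(y, 20)):
    XtX += Xc.T @ Xc
    Xty += Xc.T @ yc
beta_stream = np.linalg.solve(XtX, Xty)

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]   # reference: fit on all data at once
```

    For ordinary least squares the streamed estimate is algebraically identical to the full-data fit; for models without closed-form sufficient statistics (such as the logistic regression in the case study), the online-updating schemes reviewed in the article approximate this behavior.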

  20. Advances in Intelligent Control Systems and Computer Science

    CERN Document Server

    2013-01-01

    The conception of real-time control networks taking into account, as an integrating approach, both the specific aspects of information and knowledge processing and the dynamic and energetic particularities of physical processes and of communication networks represents one of the newest scientific and technological challenges. The new paradigm of Cyber-Physical Systems (CPS) reflects this tendency and will certainly change the evolution of the technology, with major social and economic impact. This book presents significant results in the field of process control and advanced information and knowledge processing, with applications in the fields of robotics, biotechnology, environment, energy, transportation, etc. It introduces intelligent control concepts and strategies as well as real-time implementation aspects for complex control approaches. One of the sections is dedicated to the complex problem of designing software systems for distributed information processing networks. Problems as complexity an...

  1. Computational Methods for Analyzing Health News Coverage

    Science.gov (United States)

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  2. Neutral particle transport based on the advanced method of characteristics (MOCHA)

    International Nuclear Information System (INIS)

    The paper describes the development of MOCHA, the advanced method of characteristics, based on the CHAR and ANEMONA codes, and its applications in a number of assembly and cell calculations. MOCHA represents an attempt to satisfy the needs imposed by advanced reactor designs by providing the computational ability to account for all heterogeneities within the fuel assembly, the capability of general multi-dimensional geometry simulation, flexibility in energy-group structure, the capability of multi-assembly simulation, accurate burn-up calculation, and a linearly anisotropic scattering approximation

  3. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    Energy Technology Data Exchange (ETDEWEB)

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (approximately 0.1 to 3 solar masses) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smooth Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  4. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    Science.gov (United States)

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children=s Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the...

  5. Development of computational methods for heavy lift launch vehicles

    Science.gov (United States)

    Yoon, Seokkwan; Ryan, James S.

    1993-01-01

    The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.
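    The implicit time integration described above combines lower-upper factorization with symmetric Gauss-Seidel relaxation. As an illustrative sketch of that relaxation in its simplest scalar form — a toy linear system, not the CENS3D implementation:

```python
import numpy as np

def sgs_solve(A, b, sweeps=50):
    """Symmetric Gauss-Seidel: one forward then one backward sweep per
    iteration -- the scalar analogue of the relaxation in LU-SGS schemes."""
    x = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(b)):               # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(len(b))):     # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Small diagonally dominant system (convergence is guaranteed for such matrices).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sgs_solve(A, b)
```

    In the flow solver the same forward/backward sweep structure is applied block-wise along diagonal planes of the grid, which is what makes the algorithm vectorizable as the abstract describes.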

  6. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers working on computing in physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  7. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    CERN Document Server

    Nam, H; Nazarewicz, W; Bulgac, A; Hagen, G; Kortelainen, M; Maris, P; Pei, J C; Roche, K J; Schunck, N; Thompson, I; Vary, J P; Wild, S M

    2012-01-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multidisciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadershi...

  8. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  9. Recent advances in computational intelligence in defense and security

    CERN Document Server

    Falcon, Rafael; Zincir-Heywood, Nur; Abbass, Hussein

    2016-01-01

    This volume is an initiative undertaken by the IEEE Computational Intelligence Society’s Task Force on Security, Surveillance and Defense to consolidate and disseminate the role of CI techniques in the design, development and deployment of security and defense solutions. Applications range from the detection of buried explosive hazards in a battlefield to the control of unmanned underwater vehicles, the delivery of superior video analytics for protecting critical infrastructures or the development of stronger intrusion detection systems and the design of military surveillance networks. Defense scientists, industry experts, academicians and practitioners alike will all benefit from the wide spectrum of successful applications compiled in this volume. Senior undergraduate or graduate students may also discover uncharted territory for their own research endeavors.

  10. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    Science.gov (United States)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which leads to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
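    As an illustration of the parametric approach, Prony's method estimates signal frequencies by fitting a linear-prediction model and reading the frequencies off the roots of its characteristic polynomial. A minimal noiseless sketch follows (the tone frequency and model order are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

f0, N, p = 0.12, 64, 2           # tone frequency, samples, model order (2 for a real tone)
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n)   # noiseless test signal

# Linear prediction: x[k] = -a1*x[k-1] - a2*x[k-2]; solve for (a1, a2) by least squares.
A = np.column_stack([x[p - 1:N - 1], x[p - 2:N - 2]])
a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]

# The roots of z^2 + a1*z + a2 lie at exp(±j*2*pi*f0); their angles give the frequency.
roots = np.roots(np.concatenate(([1.0], a)))
f_est = np.abs(np.angle(roots)) / (2 * np.pi)
```

    Unlike the fixed-window IEC/DFT approach, such parametric fits resolve closely spaced, time-varying components at the cost of choosing a model order and more computation, the trade-off the paper evaluates.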

  11. An experimental unification of reservoir computing methods.

    Science.gov (United States)

    Verstraeten, D; Schrauwen, B; D'Haene, M; Stroobandt, D

    2007-04-01

    Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compare all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to the inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all the types of Reservoir Computing and allows the easy simulation of a wide range of reservoir topologies for a number of benchmarks.
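    The common structure of LSMs, ESNs, and BPDC — a fixed random recurrent reservoir with a trained linear readout — can be sketched in a few lines. The toy echo state network below learns to recall its previous input; the reservoir size, spectral radius, and task are illustrative choices of ours, not those of the Reservoir Computing Toolbox:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 100, 2000, 100
W_in = rng.uniform(-0.5, 0.5, size=N)            # input weights (fixed, random)
W = rng.normal(size=(N, N))                      # reservoir weights (fixed, random)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (echo state property)

x = rng.uniform(-1, 1, size=T)                   # random scalar input signal
states = np.zeros((T, N))
s = np.zeros(N)
for t in range(T):
    s = np.tanh(W @ s + W_in * x[t])             # reservoir update; never trained
    states[t] = s

y = np.roll(x, 1)                                # task: recall the previous input x[t-1]
S, Y = states[washout:], y[washout:]             # drop the initial transient
W_out = np.linalg.lstsq(S, Y, rcond=None)[0]     # only the linear readout is trained
pred = S @ W_out
nrmse = np.sqrt(np.mean((pred - Y) ** 2) / np.var(Y))
```

    The global scaling of W — here the spectral radius of 0.9 — is exactly the parameter whose optimal value the paper's Lyapunov-based measure aims to indicate.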

  12. Computational Methods for Jet Noise Simulation

    Science.gov (United States)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  13. Near threshold computing technology, methods and applications

    CERN Document Server

    Silvano, Cristina

    2016-01-01

    This book explores near-threshold computing (NTC), a design space using techniques to run digital chips (processors) near the lowest possible voltage. Readers are equipped with specific techniques to design chips that are extremely robust, tolerating variability and remaining resilient against errors. The book provides an introduction to near-threshold computing, giving the reader a variety of tools to face the challenges of the power/utilization wall; demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point; and investigates how performance guarantees can be ensured when moving toward NTC manycores through variability-aware voltage and frequency allocation schemes.

  14. Computational methods for aerodynamic design using numerical optimization

    Science.gov (United States)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  15. Advanced Measuring (Instrumentation) Methods for Nuclear Installations: A Review

    Directory of Open Access Journals (Sweden)

    Wang Qiu-kuan

    2012-01-01

    Nuclear technology is widely used around the world. Research on measurement in nuclear installations involves many aspects, such as nuclear reactors, the nuclear fuel cycle, safety and security, nuclear accident after-action analysis, and environmental applications. In recent decades, many advanced measuring devices and techniques have been widely applied in nuclear installations. This paper mainly introduces the development of measuring (instrumentation) methods for nuclear installations and the applications of these instruments and methods.

  16. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Various HRA methods have been developed, such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H), in relation to NPP maintenance and operation. Most of these methods were developed with the conventional type of Main Control Room (MCR) in mind. They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has changed considerably with the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are particularly important because operator actions in NPP advanced MCRs are performed through them. Consequently, those conventional methods may not sufficiently consider soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested, based on a soft control task analysis and a literature review of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors that an HRA method for advanced MCRs should encompass are derived from the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, focusing mainly on the features of soft controls. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation

  17. Analytical and computational methods in electromagnetics

    CERN Document Server

    Garg, Ramesh

    2008-01-01

    This authoritative resource offers you clear and complete explanation of this essential electromagnetics knowledge, providing you with the analytical background you need to understand such key approaches as MoM (method of moments), FDTD (Finite Difference Time Domain) and FEM (Finite Element Method), and Green's functions. This comprehensive book includes all math necessary to master the material.

  18. Basic Methods for Computing Special Functions

    NARCIS (Netherlands)

    Gil, A.; Segura, J.; Temme, N.M.; Simos, T.E.

    2011-01-01

    This paper gives an overview of methods for the numerical evaluation of special functions, that is, the functions that arise in many problems from mathematical physics, engineering, probability theory, and other applied sciences. We consider in detail a selection of basic methods which are frequent

  19. Higher geometry an introduction to advanced methods in analytic geometry

    CERN Document Server

    Woods, Frederick S

    2005-01-01

    For students of mathematics with a sound background in analytic geometry and some knowledge of determinants, this volume has long been among the best available expositions of advanced work on projective and algebraic geometry. Developed from Professor Woods' lectures at the Massachusetts Institute of Technology, it bridges the gap between intermediate studies in the field and highly specialized works.With exceptional thoroughness, it presents the most important general concepts and methods of advanced algebraic geometry (as distinguished from differential geometry). It offers a thorough study

  20. 12 CFR 227.25 - Unfair balance computation method.

    Science.gov (United States)

    2010-01-01

    12 Banks and Banking 3 2010-01-01 false Unfair balance computation method. Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in... (1) ... under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of...

  1. Computational Intelligence Characterization Method of Semiconductor Device

    CERN Document Server

    Liau, Eric

    2011-01-01

    Characterization of semiconductor devices is used to gather as much data about the device as possible to determine weaknesses in design or trends in the manufacturing process. In this paper, we propose a novel multiple trip point characterization concept to overcome the constraint of the single trip point concept in the device characterization phase. In addition, we use computational intelligence techniques (e.g., neural networks, fuzzy logic, and genetic algorithms) to further manipulate these sets of multiple trip point values and tests based on semiconductor test equipment. Our experimental results demonstrate an excellent design parameter variation analysis in the device characterization phase, as well as detection of a set of worst-case tests that can provoke the worst-case variation, while the traditional approach was not capable of detecting them.

  2. Computational Methods for Modification of Metabolic Networks

    Directory of Open Access Journals (Sweden)

    Takeyuki Tamura

    2015-01-01

    In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In metabolic network modification, we modify metabolic networks by newly adding enzymes and/or knocking out genes to maximize biomass production with minimum side-effects. In this mini-review, we briefly review constraint-based formalizations for the Minimum Reaction Cut (MRC) problem, where the minimum set of reactions is deleted so that the target compound becomes non-producible, from the viewpoints of flux balance analysis (FBA), elementary modes (EM), and Boolean models. The Minimum Reaction Insertion (MRI) problem, where the minimum set of reactions is added so that the target compound newly becomes producible, is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed.
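    In a Boolean model, the Minimum Reaction Cut problem can be stated compactly: find the smallest set of reactions whose removal makes the target compound non-producible under a reachability fixpoint. A brute-force sketch on a toy network follows (the network, compound names, and reaction names are hypothetical, chosen for illustration, and brute force is only viable at toy scale — the constraint-based formulations in the review scale far better):

```python
from itertools import combinations

# Toy Boolean metabolic network: each reaction maps a substrate set to a product set.
reactions = {
    "r1": ({"A"}, {"B"}),
    "r2": ({"A"}, {"C"}),
    "r3": ({"B"}, {"T"}),
    "r4": ({"C"}, {"T"}),
}
sources, target = {"A"}, "T"

def producible(active):
    """Fixpoint reachability: compounds producible from the sources
    using only the active reactions."""
    have, changed = set(sources), True
    while changed:
        changed = False
        for r in active:
            subs, prods = reactions[r]
            if subs <= have and not prods <= have:
                have |= prods
                changed = True
    return have

def minimum_reaction_cut():
    """Smallest reaction set whose deletion makes the target non-producible."""
    names = sorted(reactions)
    for k in range(len(names) + 1):
        for cut in combinations(names, k):
            if target not in producible(set(names) - set(cut)):
                return set(cut)

cut = minimum_reaction_cut()   # both A->B->T and A->C->T must be blocked, so |cut| = 2
```
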

  3. Viscous-Inviscid Coupling Methods for Advanced Marine Propeller Applications

    OpenAIRE

    Martin Greve; Katja Wöckner-Kluwe; Moustafa Abdel-Maksoud; Thomas Rung

    2012-01-01

    The paper reports the development of coupling strategies between an inviscid direct panel method and a viscous RANS method and their application to complex propeller flows. The work is motivated by the prohibitive computational cost associated with unsteady viscous flow simulations using geometrically resolved propellers to analyse the dynamics of ships in seaways. The present effort aims to combine the advantages of the two baseline methods in order to reduce the numerical effort without comprom...

  4. Proceedings: Workshop on advanced mathematics and computer science for power systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Esselman, W.H.; Iveson, R.H. (Electric Power Research Inst., Palo Alto, CA (United States))

    1991-08-01

    The Mathematics and Computer Workshop on Power System Analysis was held February 21--22, 1989, in Palo Alto, California. The workshop was the first in a series sponsored by EPRI's Office of Exploratory Research as part of its effort to develop ways in which recent advances in mathematics and computer science can be applied to the problems of the electric utility industry. The purpose of this workshop was to identify research objectives in the field of advanced computational algorithms needed for the application of advanced parallel processing architecture to problems of power system control and operation. Approximately 35 participants heard six presentations on power flow problems, transient stability, power system control, electromagnetic transients, user-machine interfaces, and database management. In the discussions that followed, participants identified five areas warranting further investigation: system load flow analysis, transient power and voltage analysis, structural instability and bifurcation, control systems design, and proximity to instability. 63 refs.

  5. Advanced non-destructive methods for an efficient service performance

    International Nuclear Information System (INIS)

    Due to the power generation industry's desire to decrease outage time and extend inspection intervals for highly stressed turbine parts, advanced and reliable non-destructive methods were developed by the Siemens non-destructive testing laboratory. Effective outage performance requires the optimized planning of all outage activities as well as modern non-destructive examination methods, in order to examine the highly stressed components (turbine rotor, casings, valves, generator rotor) reliably and in short periods of access. This paper describes the experience of Siemens Energy with an ultrasonic phased-array inspection technique for the inspection of radial-entry pinned turbine blade roots. The developed inspection technique allows the ultrasonic inspection of steam turbine blades without blade removal. Furthermore, advanced non-destructive examination methods for joint bolts are described, which offer a significant reduction of outage duration in comparison to conventional inspection techniques. (authors)

  6. COMSAC: Computational Methods for Stability and Control. Part 1

    Science.gov (United States)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  7. An Improved SIMPLEC Method for Steady and Unsteady Flow Computations

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Michelsen, Jess; Sørensen, N. N.;

    2003-01-01

    A modified SIMPLEC scheme for flow computations on collocated grids has been developed. It is demonstrated that the standard SIMPLEC scheme is inconsistent when applied on collocated grids: for steady computations the computed solution depends on the velocity under-relaxation parameter, whereas the solutions of unsteady computations for small time steps are polluted by unphysical wiggles. A revised scheme is proposed that extends the capability of the SIMPLEC method to cope with collocated grids in a general and consistent way. The efficiency of the new scheme is demonstrated by computing ...

  8. Parallel Computing Methods For Particle Accelerator Design

    CERN Document Server

    Popescu, Diana Andreea; Hersch, Roger

    We present methods for parallelizing the transport map construction for multi-core processors and for Graphics Processing Units (GPUs). We provide an efficient implementation of the transport map construction. We describe a method for multi-core processors using the OpenMP framework which brings performance improvement over the serial version of the map construction. We developed a novel and efficient algorithm for multivariate polynomial multiplication for GPUs and we implemented it using the CUDA framework. We show the benefits of using the multivariate polynomial multiplication algorithm for GPUs in the map composition operation for high orders. Finally, we present an algorithm for map composition for GPUs.
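The core primitive parallelized above is the product of multivariate polynomials. As a minimal illustration of that operation (a serial reference version only — not the authors' CUDA algorithm), a polynomial can be stored as a dictionary mapping exponent tuples to coefficients:

```python
# Serial reference sketch of multivariate polynomial multiplication.
# Representation: {exponent_tuple: coefficient}, e.g. {(1, 0): 2.0} is 2*x.

def poly_mul(p, q):
    """Multiply two multivariate polynomials given as exponent->coeff dicts."""
    result = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            exp = tuple(a + b for a, b in zip(ea, eb))   # exponents add
            result[exp] = result.get(exp, 0.0) + ca * cb
    # Drop terms that cancelled to zero.
    return {e: c for e, c in result.items() if c != 0.0}

# (1 + x) * (1 + y) = 1 + x + y + x*y   (exponent order: (x, y))
p = {(0, 0): 1.0, (1, 0): 1.0}
q = {(0, 0): 1.0, (0, 1): 1.0}
print(poly_mul(p, q))
```

The double loop over term pairs is exactly the work that can be distributed across GPU threads, since each coefficient product is independent and only the accumulation into `result` needs coordination.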

  9. Recent advances in radial basis function collocation methods

    CERN Document Server

    Chen, Wen; Chen, C S

    2014-01-01

    This book surveys the latest advances in radial basis function (RBF) meshless collocation methods, with emphasis on recent novel kernel RBFs and new numerical schemes for solving partial differential equations. The RBF collocation methods are inherently free of integration and mesh, and avoid the tedious mesh generation involved in standard finite element and boundary element methods. This book focuses primarily on the numerical algorithms and engineering applications, and highlights a large class of novel boundary-type RBF meshless collocation methods. These methods have shown a clear edge over traditional numerical techniques, especially for problems involving infinite domains, moving boundaries, thin-walled structures, and inverse problems. Due to the rapid development in RBF meshless collocation methods, there is a need to summarize all these new materials so that they are available to scientists, engineers, and graduate students who are interested in applying these newly developed methods for solving real world’s ...
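To make the collocation idea concrete, here is a hypothetical one-dimensional Kansa-type demo (not taken from the book): solve u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using Gaussian RBFs, where f(x) = −π² sin(πx) so the exact solution is u(x) = sin(πx). The grid size N and shape parameter EPS are illustrative choices.

```python
import math

EPS = 4.0                        # Gaussian shape parameter (illustrative)
N = 11                           # collocation points = RBF centers
xs = [i / (N - 1) for i in range(N)]

def phi(x, c):                   # Gaussian RBF centered at c
    return math.exp(-(EPS * (x - c)) ** 2)

def phi_xx(x, c):                # its second derivative in x
    r2 = (x - c) ** 2
    return (4 * EPS ** 4 * r2 - 2 * EPS ** 2) * phi(x, c)

def solve(A, b):                 # dense solve, Gaussian elim. w/ pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Interior rows enforce the PDE; the first/last rows enforce the boundary
# values. No mesh is needed: only point-to-center distances enter.
A, b = [], []
for i, x in enumerate(xs):
    if i in (0, N - 1):
        A.append([phi(x, c) for c in xs])
        b.append(0.0)
    else:
        A.append([phi_xx(x, c) for c in xs])
        b.append(-math.pi ** 2 * math.sin(math.pi * x))

coef = solve(A, b)
u_mid = sum(cj * phi(0.5, c) for cj, c in zip(coef, xs))
print(u_mid)   # should be close to the exact value sin(pi/2) = 1
```

Note the well-known caveat: the collocation matrix becomes ill-conditioned as the RBFs flatten (small EPS relative to the point spacing), which is one motivation for the newer kernels and schemes the book surveys.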

  10. [Isolation and identification methods of enterobacteria group and its technological advancement].

    Science.gov (United States)

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have markedly improved through technological advancement. Clinical microbiology tests have changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of enterobacteria groups, that is, a basic and essential method for recognizing bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates, indole, methyl red, citrate, and urease tests. Commercial identification kits and automated instruments with computer-based analysis are also discussed as current methods; these methods provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be further developed.

  11. Advanced Personnel Vetting Techniques in Critical Multi-Tenant Hosted Computing Environments

    Directory of Open Access Journals (Sweden)

    Farhan Hyder Sahito

    2013-06-01

    Full Text Available The emergence of cloud computing presents a strategic direction for critical infrastructures and promises to have far-reaching effects on their systems and networks, delivering better outcomes to nations at a lower cost. However, when considering cloud computing, government entities must address a host of security issues (such as malicious insiders) beyond those of service cost and flexibility. The scope and objective of this paper is to analyze, evaluate and investigate the insider threat in cloud security in sensitive infrastructures, and to propose two proactive socio-technical solutions for securing commercial and governmental cloud infrastructures. Firstly, it proposes an actionable framework, techniques and practices to ensure that disruptions through human threats are infrequent, of minimal duration, manageable, and cause the least damage possible. Secondly, it considers extreme security measures, analyzing and evaluating human-threat assessment methods for employee screening in certain high-risk situations using cognitive analysis technology, in particular functional Magnetic Resonance Imaging (fMRI). The significance of this research is also to counter human rights and ethical dilemmas by presenting a set of ethical and professional guidelines. The main objective of this work is to analyze related risks, identify countermeasures and present recommendations to develop a security awareness culture that will allow cloud providers to effectively utilize the benefits of these advanced techniques without sacrificing system security.

  12. Data analysis of asymmetric structures advanced approaches in computational statistics

    CERN Document Server

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  13. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  14. Advanced Finite Element Method for Nano-Resonators

    CERN Document Server

    Zschiedrich, L; Kettner, B; Schmidt, F

    2006-01-01

    Miniaturized optical resonators with spatial dimensions of the order of the wavelength of the trapped light offer prospects for a variety of new applications such as quantum processing or the construction of meta-materials. Light propagation in these structures is modelled by Maxwell's equations. For a deeper numerical analysis one may compute the scattered field when the structure is illuminated, or one may compute the resonances of the structure. We therefore address in this paper the electromagnetic scattering problem as well as the computation of resonances in an open system. For the simulation, efficient and reliable numerical methods are required that cope with the infinite domain. We use transparent boundary conditions based on the Perfectly Matched Layer (PML) method combined with a novel adaptive strategy to determine optimal discretization parameters such as the thickness of the sponge layer or the mesh width. Further, a novel iterative solver for time-harmonic Maxwell's equations is presented.

  15. NATO Advanced Study Institute on Advances in the Computer Simulations of Liquid Crystals

    CERN Document Server

    Zannoni, Claudio

    2000-01-01

    Computer simulations provide an essential set of tools for understanding the macroscopic properties of liquid crystals and of their phase transitions in terms of molecular models. While simulations of liquid crystals are based on the same general Monte Carlo and molecular dynamics techniques as are used for other fluids, they present a number of specific problems and peculiarities connected to the intrinsic properties of these mesophases. The field of computer simulations of anisotropic fluids is interdisciplinary and is evolving very rapidly. The present volume covers a variety of techniques and model systems, from lattices to hard particle and Gay-Berne to atomistic, for thermotropics, lyotropics, and some biologically interesting liquid crystals. Contributions are written by an excellent panel of international lecturers and provide a timely account of the techniques and problems in the field.

  16. Computational stress analysis using finite volume methods

    OpenAIRE

    Fallah, Nosrat Allah

    2000-01-01

    There is a growing interest in applying finite volume methods to model solid mechanics problems and multi-physics phenomena. During the last ten years an increasing amount of activity has taken place in this area. Unlike the finite element formulation, which generally involves volume integrals, the finite volume formulation transfers volume integrals to surface integrals using the divergence theorem. This transformation for the convection and diffusion terms in the governing equations ensures...

  17. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...
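Of the methods compared above, the FDTD scheme is the simplest to sketch. The following toy one-dimensional leapfrog (Yee) update is a generic illustration of the method's structure only — it is a normalized vacuum example (c = 1, unit cell size, Courant number 0.5), not a lightning-surge model from the book, and the grid size and Gaussian source are invented for the demo.

```python
import math

# Minimal 1D FDTD (Yee leapfrog) sketch. E lives on integer grid points,
# H on half points; each field is advanced from the spatial difference
# (discrete curl) of the other. Untouched end values of ez act as
# perfectly conducting (reflecting) boundaries.

NX, NT, S = 200, 300, 0.5        # cells, time steps, Courant number (< 1)
ez = [0.0] * NX                  # electric field
hy = [0.0] * (NX - 1)            # magnetic field

for n in range(NT):
    for i in range(NX - 1):      # update H from the curl of E
        hy[i] += S * (ez[i + 1] - ez[i])
    for i in range(1, NX - 1):   # update E from the curl of H
        ez[i] += S * (hy[i] - hy[i - 1])
    # Additive Gaussian-pulse source injected at cell 50.
    ez[50] += math.exp(-((n - 40) / 12.0) ** 2)

print(max(abs(v) for v in ez))   # bounded: the scheme is stable for S < 1
```

The same two-step structure carries over to full 3D Maxwell solvers; the modeling effort the book describes lies in representing conductors, ground, and nonlinear components within this grid.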

  18. Computing Method of Forces on Rivet

    Directory of Open Access Journals (Sweden)

    Ion DIMA

    2014-03-01

    Full Text Available This article aims to provide a quick methodology for calculating the forces on a rivet in single shear using the finite element method (FEM – NASTRAN/PATRAN). These forces can be used for the bearing, inter-rivet buckling and riveting checks. For this method to be efficient and fast, a macro has been developed based on the methodology described in the article. The macro was written in Visual Basic with an Excel interface. In the beginning phase of any aircraft project, when the rivet types and positions are not yet precisely known, the modelling of rivets, as attachment elements between items, is made node on node in the finite element model, without taking account of the rivet positions. Although the rivets are not modelled in the finite element model, this method together with the macro enables a quick extraction and calculation of the forces on the rivet. This calculation of forces on the rivet is intended for the critical case, selected from the stress plots of NASTRAN for max./min. principal stress and shear.

  19. Statistical and Computational Methods for Genetic Diseases: An Overview

    OpenAIRE

    Francesco Camastra; Maria Donata Di Taranto; Antonino Staiano

    2015-01-01

    The identification of the causes of genetic diseases has been carried out by several approaches of increasing complexity. Innovation in genetic methodologies leads to the production of large amounts of data that need the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods, paying attention to methods for sequence analysis and complex diseases.

  20. A Parallel Iterative Method for Computing Molecular Absorption Spectra

    OpenAIRE

    Koval, Peter; Foerster, Dietrich; Coulaud, Olivier

    2010-01-01

    We describe a fast parallel iterative method for computing molecular absorption spectra within TDDFT linear response, using the LCAO method. We use a local basis of "dominant products" to parametrize the space of orbital products that occur in the LCAO approach. In this basis, the dynamical polarizability is computed iteratively within an appropriate Krylov subspace. The iterative procedure uses a matrix-free GMRES method to determine the (interacting) density response. The resulting cod...
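"Matrix-free" here means the solver needs only matrix-vector products, never the matrix itself. The sketch below is a hand-rolled full GMRES (no restart, with the small least-squares problem solved via normal equations) applied to a tiny stand-in 2×2 operator — it illustrates the interface only, not the paper's TDDFT response code or its performance tricks.

```python
# Minimal matrix-free GMRES sketch: Arnoldi iteration builds an
# orthonormal Krylov basis using only matvec(v) calls.

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return dot(u, u) ** 0.5
def axpy(a, x, y): return [a * xi + yi for xi, yi in zip(x, y)]

def solve_dense(A, b):                     # Gaussian elim., partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def gmres(matvec, b, m):
    """Solve A x = b (x0 = 0) with m Arnoldi steps; no restart."""
    beta = norm(b)
    V = [[bi / beta for bi in b]]                  # Krylov basis vectors
    H = [[0.0] * m for _ in range(m + 1)]          # upper Hessenberg
    for j in range(m):
        w = matvec(V[j])
        for i in range(j + 1):                     # modified Gram-Schmidt
            H[i][j] = dot(w, V[i])
            w = axpy(-H[i][j], V[i], w)
        H[j + 1][j] = norm(w)
        # On (near-)breakdown the Krylov space is exhausted; pad with zeros.
        V.append([wi / H[j + 1][j] for wi in w] if H[j + 1][j] > 1e-14
                 else [0.0] * len(b))
    # min ||beta*e1 - H y|| via the normal equations (fine for tiny m).
    rhs = [beta] + [0.0] * m
    HtH = [[sum(H[k][i] * H[k][j] for k in range(m + 1)) for j in range(m)]
           for i in range(m)]
    Htr = [sum(H[k][i] * rhs[k] for k in range(m + 1)) for i in range(m)]
    y = solve_dense(HtH, Htr)
    x = [0.0] * len(b)
    for j in range(m):
        x = axpy(y[j], V[j], x)
    return x

def matvec(v):                 # stand-in nonsymmetric operator [[3,1],[0,2]]
    return [3.0 * v[0] + 1.0 * v[1], 2.0 * v[1]]

x = gmres(matvec, [1.0, 1.0], m=2)
print(x)   # exact solution [1/6, 1/2], since m equals the dimension
```

Production codes replace the normal-equations step with Givens rotations and add restarting and preconditioning; the key point for the abstract above is that `matvec` can be any black-box application of the response operator.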

  1. Good practices in development of advanced assembly/core calculation methods and implementations of AEGIS/SCOPE2

    International Nuclear Information System (INIS)

    This paper reviews the history of the development of AEGIS/SCOPE2, an advanced in-core fuel management code for PWRs. The initial project, development of a prototype code, was started in 1996 as a feasibility study of advanced calculation methods/algorithms for advanced computation environments such as distributed parallel computers like the PC clusters that are commonly used nowadays. Following the successful development of the prototype, a production-level advanced core calculation code, SCOPE2, was developed, followed by AEGIS, an advanced assembly calculation code. These codes have been developed on the basis of object-oriented programming and agile software development. The authors extracted the key factors for the success of the project as good practices from the viewpoints of code design, implementation, project management, and verification and validation. Those practices are universal and may be applicable to any project in the future. (author)

  2. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  3. Current advances in diagnostic methods of Acanthamoeba keratitis

    Institute of Scientific and Technical Information of China (English)

    Wang Yuehua; Feng Xianmin; Jiang Linzhe

    2014-01-01

    Objective: The objective of this article was to review the current advances in diagnostic methods for Acanthamoeba keratitis (AK). Data sources: Data used in this review were retrieved from PubMed (1970-2013). The terms "Acanthamoeba keratitis" and "diagnosis" were used for the literature search. Study selection: Data from published articles regarding AK and diagnosis in clinical trials were identified and reviewed. Results: The diagnostic methods for the eight species implicated in AK were reviewed. Among all diagnostic procedures, corneal scraping and smear examination was an essential diagnostic method. Polymerase chain reaction was the most sensitive and accurate detection method. Culturing of Acanthamoeba was a reliable method for final diagnosis of AK. Confocal microscopy to detect Acanthamoeba was also effective, without any invasive procedure, and was helpful in the early diagnosis of AK. Conclusion: Clinically, the conjunction of various diagnostic methods to diagnose AK was necessary.

  4. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  5. Advances and Computational Tools towards Predictable Design in Biological Engineering

    Directory of Open Access Journals (Sweden)

    Lorenzo Pasotti

    2014-01-01

    Full Text Available The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the used components or on the prediction of their context-dependent behaviour, when part functioning depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated.

  6. Advanced Regression Methods in Finance and Economics: Three Essays

    OpenAIRE

    Hofmarcher, Paul

    2012-01-01

    In this thesis advanced regression methods are applied to discuss and investigate highly relevant research questions in the areas of finance and economics. In the field of credit risk, the thesis investigates a hierarchical model which makes it possible to obtain a consensus score when several ratings are available for each firm. Autoregressive processes and random effects are used to model both a correlation structure between and within the obligors in the sample. The model also allows one to validate ...

  7. Multiscale methods for computational RNA enzymology

    Science.gov (United States)

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  8. MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. The result is a modified least-squares method for the calculation of a variational problem defined on Ck(Ω), in which the base functions are polynomials and the computation of the problem is reduced to computing the coefficients of the base functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...
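The general idea — recast a boundary value problem as a least-squares fit over polynomial base functions, so the unknowns become the basis coefficients — can be illustrated on a hypothetical toy instance (invented for this sketch, not from the paper): u″(x) = −2 on [0, 1] with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 − x). The basis φ_k(x) = xᵏ(1 − x) satisfies the boundary conditions by construction.

```python
# Least-squares solve of a 1D Dirichlet problem over polynomial basis
# functions phi_k(x) = x**k * (1 - x). The residual of u'' = f is
# minimized over interior collocation points; the computation reduces
# to solving the normal equations for the basis coefficients.

def phi_dd(k, x):
    """Second derivative of x**k - x**(k+1)."""
    a = k * (k - 1) * x ** (k - 2) if k >= 2 else 0.0
    b = (k + 1) * k * x ** (k - 1) if k >= 1 else 0.0
    return a - b

def f(x):                      # right-hand side; exact solution x*(1-x)
    return -2.0

ks = [1, 2]                            # basis exponents (illustrative)
pts = [i / 20 for i in range(1, 20)]   # interior collocation points

# Normal equations G c = g for min_c sum_x (sum_k c_k phi_k''(x) - f(x))^2
G = [[sum(phi_dd(i, x) * phi_dd(j, x) for x in pts) for j in ks] for i in ks]
g = [sum(phi_dd(i, x) * f(x) for x in pts) for i in ks]

det = G[0][0] * G[1][1] - G[0][1] * G[1][0]        # 2x2 Cramer solve
c = [(g[0] * G[1][1] - g[1] * G[0][1]) / det,
     (G[0][0] * g[1] - G[1][0] * g[0]) / det]

u_mid = sum(ck * 0.5 ** k * (1 - 0.5) for ck, k in zip(c, ks))
print(c, u_mid)    # expect c close to [1, 0] and u(0.5) close to 0.25
```

Because the exact solution lies in the basis, the least-squares residual is driven to zero and the first coefficient recovers it exactly; for genuinely singular problems, the point of the modified method above is precisely to handle cases where a naive fit breaks down.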

  9. Current advances in molecular, biochemical, and computational modeling analysis of microalgal triacylglycerol biosynthesis.

    Science.gov (United States)

    Lenka, Sangram K; Carbonaro, Nicole; Park, Rudolph; Miller, Stephen M; Thorpe, Ian; Li, Yantao

    2016-01-01

    Triacylglycerols (TAGs) are highly reduced energy storage molecules ideal for biodiesel production. Microalgal TAG biosynthesis has been studied extensively in recent years, both at the molecular level and systems level through experimental studies and computational modeling. However, discussions of the strategies and products of the experimental and modeling approaches are rarely integrated and summarized together in a way that promotes collaboration among modelers and biologists in this field. In this review, we outline advances toward understanding the cellular and molecular factors regulating TAG biosynthesis in unicellular microalgae with an emphasis on recent studies on rate-limiting steps in fatty acid and TAG synthesis, while also highlighting new insights obtained from the integration of multi-omics datasets with mathematical models. Computational methodologies such as kinetic modeling, metabolic flux analysis, and new variants of flux balance analysis are explained in detail. We discuss how these methods have been used to simulate algae growth and lipid metabolism in response to changing culture conditions and how they have been used in conjunction with experimental validations. Since emerging evidence indicates that TAG synthesis in microalgae operates through coordinated crosstalk between multiple pathways in diverse subcellular destinations including the endoplasmic reticulum and plastids, we discuss new experimental studies and models that incorporate these findings for discovering key regulatory checkpoints. Finally, we describe tools for genetic manipulation of microalgae and their potential for future rational algal strain design. This comprehensive review explores the potential synergistic impact of pathway analysis, computational approaches, and molecular genetic manipulation strategies on improving TAG production in microalgae.

  11. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-01

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods. PMID:26549429

  12. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
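The querying-and-aggregation step in the claim above can be sketched in a few lines. This pure-Python stand-in (the year/quarter hierarchy, the "sales" measure, and the row data are invented for illustration, not part of the patent) aggregates a measure over requested dimension levels, with each level then mapped to a component of the plot:

```python
from collections import defaultdict

# Toy dataset: rows of (year, quarter, sales). "year -> quarter" forms a
# two-level dimension hierarchy; "sales" is the measure.
rows = [
    (2009, "Q1", 10.0), (2009, "Q2", 12.0),
    (2010, "Q1", 15.0), (2010, "Q1", 5.0), (2010, "Q2", 18.0),
]

def query(rows, levels=("year", "quarter"), measure="sales"):
    """Aggregate the measure over the requested dimension levels,
    per a specification of which levels the plot components use."""
    idx = {"year": 0, "quarter": 1, "sales": 2}
    agg = defaultdict(float)
    for r in rows:
        key = tuple(r[idx[lv]] for lv in levels)
        agg[key] += r[idx[measure]]
    return dict(agg)

# First hierarchy level drives one plot component (e.g. columns),
# the second level another (e.g. marks within a column).
plot_data = query(rows)
print(plot_data)
# {(2009, 'Q1'): 10.0, (2009, 'Q2'): 12.0, (2010, 'Q1'): 20.0, (2010, 'Q2'): 18.0}
```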

  13. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  14. Mathematical and computational methods in nuclear physics

    International Nuclear Information System (INIS)

    The lectures, covering various aspects of the many-body problem in nuclei, review present knowledge and include some unpublished material as well. Bohigas and Giannoni discuss the fluctuation properties of spectra of many-body systems by means of random matrix theories, and the attempts to search for quantum mechanical manifestations of classical chaotic motion. The role of spectral distributions (expressed as explicit functions of the microscopic matrix elements of the Hamiltonian) in the statistical spectroscopy of nuclear systems is analyzed by French. Zucker, after a brief review of the theoretical basis of the shell model, discusses a reformulation of the theory of effective interactions and gives a survey of the linked cluster theory. Goeke's lectures center on the mean-field methods, particularly TDHF, used in the investigation of the large-amplitude nuclear collective motion, pointing out both the successes and failures of the theory

  15. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
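The edge-response procedure described above can be sketched in a few lines: differentiate the edge spread function to get the line spread function, Fourier transform, and normalize. The synthetic error-function edge below stands in for a measured cylinder-edge profile; the grid size and the 0.8 mm blur width are illustrative assumptions.

```python
# Sketch of the edge-response method for a 1D MTF curve. A synthetic
# error-function edge (Gaussian blur, sigma = 0.8 mm) stands in for a
# measured cylinder-edge profile.
import numpy as np
from math import erf, sqrt

n = 256
x = np.linspace(-5.0, 5.0, n)        # mm, sample positions across the edge
esf = np.array([0.5 * (1.0 + erf(xi / (0.8 * sqrt(2.0)))) for xi in x])

lsf = np.gradient(esf, x)            # numerical derivative -> line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                        # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(n, d=x[1] - x[0])  # spatial frequencies, cycles/mm

print(round(float(mtf[0]), 3))       # 1.0 by construction
```

For this Gaussian edge the curve should fall off smoothly with frequency; with real data the same steps apply to the averaged edge profile extracted from the active-contour boundary.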

  16. Customizing computational methods for visual analytics with big data.

    Science.gov (United States)

    Choo, Jaegul; Park, Haesun

    2013-01-01

The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  17. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    Science.gov (United States)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
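As a minimal illustration of one evolutionary algorithm named above, the sketch below implements classic differential evolution on a toy objective. The population size, generation count, and the F and CR constants are conventional textbook choices, not values from the article.

```python
# Minimal differential evolution: a population of candidate parameter sets
# is evolved by mutation, crossover, and greedy selection toward an optimum.
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct partners for the difference vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:        # greedy selection: trial competes with parent
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Minimize the 3D sphere function; the optimum is the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(f"best cost: {fx:.2e}")
```

In the article's setting, the objective function would be a full computational engineering model evaluated in parallel on a cluster rather than a closed-form expression.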

  18. Computational methods for internal flows with emphasis on turbomachinery

    Science.gov (United States)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications, while the viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  19. Review - Computational methods for internal flows with emphasis on turbomachinery

    Science.gov (United States)

    Mcnally, W. D.; Sockol, P. M.

    1985-01-01

Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications, while the viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  20. A stoichiometric calibration method for dual energy computed tomography

    Science.gov (United States)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic
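For readers unfamiliar with effective atomic numbers, the sketch below evaluates the common power-law parameterization of the EAN from elemental mass fractions. The exponent m = 3.1 and the electron-fraction weighting are textbook conventions, not necessarily the rigorous EAN definition this paper adopts.

```python
# Power-law effective atomic number from elemental composition.
# The exponent m = 3.1 and the water composition are common textbook
# values, used here only for illustration.

def effective_atomic_number(composition, m=3.1):
    """composition: list of (Z, A, mass_fraction) tuples."""
    # electron contribution of each element is proportional to w * Z / A
    ne = [(w * z / a, z) for z, a, w in composition]
    total = sum(frac for frac, _ in ne)
    lam = [(frac / total, z) for frac, z in ne]   # electron fractions
    return sum(l * z ** m for l, z in lam) ** (1.0 / m)

# Water: H (Z=1, A=1.008, 11.19% by mass), O (Z=8, A=15.999, 88.81%)
water = [(1, 1.008, 0.1119), (8, 15.999, 0.8881)]
print(round(effective_atomic_number(water), 2))   # roughly 7.4-7.5 for water
```

A DECT workflow in the spirit of the paper would invert measured HU pairs for (ED, EAN) and then map EAN to an I-value via the fitted relation to obtain stopping powers.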

  1. A stoichiometric calibration method for dual energy computed tomography

    International Nuclear Information System (INIS)

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111–24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED–EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a

  2. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

Aiming at the concrete tasks of information fusion in a computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of vision test objects is realized by fusing vision information with non-vision auxiliary information; applications include the recognition of material defects, autonomous recognition of parts by intelligent robots, and automatic understanding and recognition of defect images by computer.

  3. Computers-for-edu: An Advanced Business Application Programming (ABAP) Teaching Case

    Science.gov (United States)

    Boyle, Todd A.

    2007-01-01

    The "Computers-for-edu" case is designed to provide students with hands-on exposure to creating Advanced Business Application Programming (ABAP) reports and dialogue programs, as well as navigating various mySAP Enterprise Resource Planning (ERP) transactions needed by ABAP developers. The case requires students to apply a wide variety of ABAP…

  4. Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis

    NARCIS (Netherlands)

    Nikkilä, J.; Vos, de W.M.

    2010-01-01

    GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex micr

  5. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Forest M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bochev, Pavel B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cameron-Smith, Philip J.. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Easter, Richard C [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Scott M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ghan, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Xiaohong [Univ. of Wyoming, Laramie, WY (United States); Lowrie, Robert B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ma, Po-lun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sacks, William J. [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Shrivastava, Manish [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Singh, Balwinder [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tautges, Timothy J. [Argonne National Lab. (ANL), Argonne, IL (United States); Taylor, Mark A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Vertenstein, Mariana [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Worley, Patrick H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-01-15

The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  6. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a

  7. A first attempt to bring computational biology into advanced high school biology classrooms.

    Directory of Open Access Journals (Sweden)

    Suzanne Renick Gallagher

    2011-10-01

Full Text Available Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology unit on genetic evolution into advanced biology classes at two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  8. Computational methods in natural product chemistry

    International Nuclear Information System (INIS)

Molecular mechanics and density functional theory methods have been applied to a variety of natural products for structural characterization, especially for the determination of absolute configuration (AC). First, time-dependent density functional theory using three density functionals (B3PW91, PBE0, B3LYP) and three basis sets [TZVP, SVP, 6-31G(d)] was applied to the four possible configurations [(R,R), (R,S), (S,R), (S,S)] of loliolides to predict the chiroptical properties. Based on the results of these studies, B3PW91/TZVP was selected for the successful assignment of the AC of 10 eremophilane sesquiterpenes from Petasites hybridus by comparison of experimental and calculated circular dichroism spectra. An important result of these investigations was the observation of a concentration dependence of both the CD and NMR spectra of certain derivatives, which indicates that simply relying on the experimental determination of the CD spectra at a single concentration could be misleading. B3PW91/TZVP was also successfully applied to 14 derivatives of 4-methyl-1,3,4,5-tetrahydro-2H-1,5-benzodiazepin-2-one to determine the AC as R, and B3LYP/6-31G(d) was used to calculate interconversion barriers between conformations of the benzodiazepines. A comparison of Auto-, DHPLcy2k and DCXplorer thermodynamic data generated by DHPLC with theoretically calculated data was also performed for four axially chiral biscarbostyrils. For cpd2 both programs delivered similar values of 90 and 93 kJ/mol, which are in good agreement with the calculated [B3LYP/6-31G(d)] value, G = 99 kJ/mol. (author)

  9. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-01

    The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to this rapid improvement. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes, that fully utilizes the hardware and software capabilities of new computer architectures, that probes the limits of climate predictability, and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  10. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence...... with respect to , appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the...

  11. Recent advances in neutral particle transport methods and codes

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Y.Y.

    1996-06-01

An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also mentioned.

  12. Methods for recovering multiplex computer-synthesized fourier microholograms

    International Nuclear Information System (INIS)

    A new type of the holographic memory based on the computer synthesis of Fourier holograms of data pages has been considered. Methods for recovering multiplexed one-dimensional structures have been proposed

  13. Computing the crystal growth rate by the interface pinning method

    DEFF Research Database (Denmark)

    Pedersen, Ulf Rørbæk; Hummel, Felix; Dellago, Christoph

    2015-01-01

An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations … is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high-temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events.

  14. Platform-independent method for computer aided schematic drawings

    Energy Technology Data Exchange (ETDEWEB)

    Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  15. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  16. Innovations and advances in computing, informatics, systems sciences, networking and engineering

    CERN Document Server

    Elleithy, Khaled

    2015-01-01

Innovations and Advances in Computing, Informatics, Systems Sciences, Networking and Engineering. This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Eighth and some selected papers of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2012 & CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. · Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; · Includes chapters in the most a...

  17. Advanced thermal hydraulic method using 3x3 pin modeling

    International Nuclear Information System (INIS)

    Advanced thermal hydraulic methods are being developed as part of the US DOE sponsored Nuclear Hub program called CASL (Consortium for Advanced Simulation of LWRs). One of the key objectives of the Hub program is to develop a multi-physics tool which evaluates neutronic, thermal hydraulic, structural mechanics and nuclear fuel rod performance in rod bundles to support power uprates, increased burnup/cycle length and life extension for US nuclear plants. Current design analysis tools are separate and applied in series using simplistic models and conservatisms in the analysis. In order to achieve key Nuclear Hub objectives a higher fidelity, multi-physics tool is needed to address the challenge problems that limit current reactor performance. This paper summarizes the preliminary development of a multi-physics tool by performing 3x3 pin modeling and making comparisons to available data. (author)

  18. 1st International Conference on Computational Advancement in Communication Circuits and Systems

    CERN Document Server

    Dalapati, Goutam; Banerjee, P; Mallick, Amiya; Mukherjee, Moumita

    2015-01-01

This book comprises the proceedings of the 1st International Conference on Computational Advancement in Communication Circuits and Systems (ICCACCS 2014), organized by Narula Institute of Technology under the patronage of the JIS group and affiliated to West Bengal University of Technology. The conference was supported by the Technical Education Quality Improvement Program (TEQIP), New Delhi, India, in technical collaboration with the IEEE Kolkata Section and with Springer as publication partner. The book contains 62 refereed papers that aim to highlight new theoretical and experimental findings in the field of Electronics and Communication Engineering, including interdisciplinary fields like Advanced Computing, Pattern Recognition and Analysis, and Signal and Image Processing. The proceedings cover the principles, techniques and applications in microwave & devices, communication & networking, signal & image processing, and computations & mathematics & control. The proceedings reflect the conference’s emp...

  19. A lightweight method for computing ball spin in real time

    OpenAIRE

    Cristina, Federico; Dapoto, Sebastián H.; Russo, Claudia Cecilia

    2007-01-01

    The present paper poses a new method for computing the rotation of a ball in sport training situations, when the ball is approaching the goal line. The proposed method significantly reduces the hardware requirements associated to the capture, as well as the computational complexity necessary to obtain the results. The system's objective is to improve the player's technique and training methodology, and it is treated within the scope of the Institute's research line on signal and image process...

  20. Panel-Method Computer Code For Potential Flow

    Science.gov (United States)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.

    1992-01-01

Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnels. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.

  1. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  2. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on the fractions introduced by Fibonacci. Thanks to the possibility of reproducing Fibonacci’s different computational procedures on a computer, it was possible to identify some calculation errors present in the different versions of the original text.
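Fibonacci's "cross" multiplication translates naturally into a modern language. The sketch below follows the usual digit-by-digit reading of the method for two two-digit factors, shown here in Python rather than the paper's C++ for brevity; the digit handling is a plausible reconstruction, not the paper's listing.

```python
# Fibonacci-style "cross" multiplication of two two-digit numbers:
# units digits multiply first, then the crossed digit products, then the
# tens digits, carrying at each step.

def cross_multiply(x, y):
    """Multiply two two-digit numbers digit by digit with carries."""
    a1, a0 = divmod(x, 10)            # tens and units of x
    b1, b0 = divmod(y, 10)            # tens and units of y
    carry, d0 = divmod(a0 * b0, 10)   # units place
    carry, d1 = divmod(a0 * b1 + a1 * b0 + carry, 10)  # the "cross" products
    high = a1 * b1 + carry            # hundreds and above
    return high * 100 + d1 * 10 + d0

print(cross_multiply(37, 49))  # 1813, matching 37 * 49
```

Counting the digit products gives the quadratic complexity in the number of digits that the paper evaluates for the general method.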

  3. Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2011-01-01

    Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...
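As a toy illustration of the model's premise that processing ability and bandwidth must be granted together, the sketch below admits a request only when both demands fit the shared pool. The greedy admission rule and all numbers are illustrative assumptions, not the paper's optimization method.

```python
# Joint two-resource admission from a shared pool: a request is accepted
# only if BOTH its processing and bandwidth demands fit simultaneously.
# Illustrative only; the paper formulates this as an optimization problem.

def admit_requests(pool_cpu, pool_bw, requests):
    """requests: list of (cpu, bandwidth) demands, dedicated once granted."""
    admitted = []
    for cpu, bw in requests:
        if cpu <= pool_cpu and bw <= pool_bw:   # joint feasibility check
            pool_cpu -= cpu
            pool_bw -= bw
            admitted.append((cpu, bw))
    return admitted, pool_cpu, pool_bw

# Second request is rejected: enough CPU remains, but not enough bandwidth.
admitted, cpu_left, bw_left = admit_requests(
    10.0, 100.0, [(4, 30), (5, 80), (3, 20)])
print(len(admitted), cpu_left, bw_left)
```

The rejection of the middle request shows why allocating the two resources independently would overstate capacity: each resource alone is sufficient for some request that the joint constraint rules out.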

  4. Novel computational biology methods and their applications to drug discovery

    Institute of Scientific and Technical Information of China (English)

    Sharangdhar S. PHATAK; Hoang T. TRAN; Shuxing ZHANG

    2011-01-01

    Computational biology methods are now firmly entrenched in the drug discovery process. These methods focus on modeling and simulations of biological systems to complement and direct conventional experimental approaches. Two important branches of computational biology include protein homology modeling and the computational biophysics method of molecular dynamics. Protein modeling methods attempt to accurately predict three-dimensional (3D) structures of uncrystallized proteins for subsequent structure-based drug design applications. Molecular dynamics methods aim to elucidate the molecular motions of the static representations of crystallized protein structures. In this review we highlight recent novel methodologies in the field of homology modeling and molecular dynamics. Selected drug discovery applications using these methods conclude the review.

  5. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  6. Using boundary methods to compute the Casimir energy

    CERN Document Server

    Lombardo, F C; Villar, P I

    2010-01-01

    We discuss new approaches to compute numerically the Casimir interaction energy for waveguides of arbitrary section, based on the boundary methods traditionally used to compute eigenvalues of the 2D Helmholtz equation. These methods are combined with Cauchy's theorem in order to perform the sum over modes. As an illustration, we describe a point-matching technique to compute the vacuum energy for waveguides containing media with different permittivities. We present explicit numerical evaluations for perfectly conducting surfaces in the case of concentric corrugated cylinders and a circular cylinder inside an elliptic one.

  7. Methods and advances in the study of aeroelasticity with uncertainties

    Institute of Scientific and Technical Information of China (English)

    Dai Yuting; Yang Chao

    2014-01-01

    Uncertainties denote the operators which describe data error, numerical error and model error in mathematical methods. The study of aeroelasticity with uncertainty embedded in the subsystems, such as the uncertainty in the modeling of structures and aerodynamics, has been a hot topic in the last decades. In this paper, advances in the analysis and design of aeroelasticity with uncertainty are summarized in detail. According to the non-probabilistic or probabilistic uncertainty, the developments of theories, methods and experiments with application to both robust and probabilistic aeroelasticity analysis are presented, respectively. In addition, the advances in aeroelastic design considering either probabilistic or non-probabilistic uncertainties are introduced along with aeroelastic analysis. This review focuses on the robust aeroelasticity study based on the structured singular value method, namely the μ method. It covers the numerical calculation algorithm of the structured singular value, uncertainty model construction, robust aeroelastic stability analysis algorithms, uncertainty level verification, and robust flutter boundary prediction in the flight test, etc. The key results and conclusions are explored. Finally, several promising problems on aeroelasticity with uncertainty are proposed for future investigation.

  8. Towards Qualitative Computer Science Education: Engendering Effective Teaching Methods

    Directory of Open Access Journals (Sweden)

    Basirat A. Adenowo

    2013-09-01

    Full Text Available An investigation into the teaching method(s) that can effectively yield qualitative computer science education in Basic Schools becomes necessary due to the Nigerian government policy on education. The government's policy stipulates that every graduate of Basic Schools or UBE (Universal Basic Education) should be computer literate. This policy intends to ensure her citizens are ICT (Information and Communication Technology) compliant. The foregoing thus necessitates the production of highly qualified manpower, grounded in computer knowledge, to implement the computer science education strand of the UBE curriculum. Accordingly, this research investigates the opinion of computer teacher-trainees on the teaching methods used while on training. Some of the teacher-trainees that taught computer study while on teaching practice were systematically sampled using a “purposive” sampling technique. The results show consensus in male and female teacher-trainees' views; both genders agreed that all the teaching methods used while on training will engender effective teaching of computer study. On the whole, the mean performance ratings of male teacher-trainees were found to be higher than those of females. However, this is not in accord with the target set by the Universal Basic Education Commission, which intends to eliminate gender disparity in the UBE programme. The results thus suggest the need for further investigation using a larger sample.

  9. Computation of electron energy loss spectra by an iterative method

    Energy Technology Data Exchange (ETDEWEB)

    Koval, Peter [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Centro de Física de Materiales CFM-MPC, Centro Mixto CSIC-UPV/EHU, Paseo Manuel de Lardizabal 5, E-20018 San Sebastián (Spain); Ljungberg, Mathias Per [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Foerster, Dietrich [LOMA, Université de Bordeaux 1, 351 Cours de la Liberation, 33405 Talence (France); Sánchez-Portal, Daniel [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Centro de Física de Materiales CFM-MPC, Centro Mixto CSIC-UPV/EHU, Paseo Manuel de Lardizabal 5, E-20018 San Sebastián (Spain)

    2015-07-01

    A method is presented to compute the dielectric function for extended systems using linear response time-dependent density functional theory. Localized basis functions with finite support are used to expand both eigenstates and response functions. The electron-energy loss function is directly obtained by an iterative Krylov-subspace method. We apply our method to graphene and silicon and compare it to plane-wave based approaches. Finally, we compute the electron-energy loss spectrum of a C{sub 60} crystal to demonstrate the merits of the method for molecular crystals, where it will be most competitive.
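    The iterative Krylov-subspace idea can be sketched generically (this is not the paper's TDDFT implementation): a Lanczos recursion tridiagonalizes a symmetric operator, and a continued fraction then yields resolvent matrix elements of the form ⟨v|(z − H)⁻¹|v⟩ with z = ω + iη, whose imaginary part plays the role of a loss-type spectrum:

```python
import numpy as np

def lanczos_resolvent(H, v, z, m=60):
    """Approximate <v|(z - H)^(-1)|v> for symmetric H by an m-step Lanczos
    recursion followed by a continued-fraction evaluation."""
    q = v / np.linalg.norm(v)
    q_prev = np.zeros_like(q)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = H @ q - beta * q_prev
        alpha = float(q @ w)
        w = w - alpha * q
        beta = float(np.linalg.norm(w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # invariant subspace reached: exact result
            break
        q_prev, q = q, w / beta
    # Continued fraction over the tridiagonal coefficients, innermost first.
    g = 0.0
    for a, b in zip(reversed(alphas), reversed(betas)):
        g = 1.0 / (z - a - b * b * g)
    return float(v @ v) * g
```

Only matrix-vector products with H are needed, which is what makes Krylov methods attractive for large response matrices; −Im g(ω + iη)/π then gives a broadened spectral weight.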

  10. Advanced Systems Biology Methods in Drug Discovery and Translational Biomedicine

    OpenAIRE

    Jun Zou; Ming-Wu Zheng; Gen Li; Zhi-Guang Su

    2013-01-01

    Systems biology has developed exponentially in recent years and has been widely utilized in biomedicine to better understand the molecular basis of human disease and the mechanism of drug action. Here, we discuss the fundamental concept of systems biology and its two most commonly used computational methods, namely network analysis and dynamical modeling. The applications of systems biology in elucidating human disease are highlighted, consisting of human disease networ...

  11. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Science.gov (United States)

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
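    The core step, propagating one layer's angular spectrum without paraxial approximation, can be sketched with FFTs as follows (a generic illustration, not the authors' code; the complex square root makes evanescent components decay rather than propagate):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field over distance z using the exact
    (non-paraxial) angular-spectrum transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # kz from the full dispersion relation, with no paraxial expansion.
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(arg.astype(complex))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```

In a layer-oriented CGH each depth layer is propagated this way to the hologram plane and the resulting sub-holograms are summed, which is what avoids the per-point cost of point-oriented methods.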

  12. Advances on methods for mapping QTL in plant

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yuan-Ming

    2006-01-01

    Advances in methods for mapping quantitative trait loci (QTL) are first summarized. Then, some new methods, including mapping multiple QTL, fine mapping of QTL, and mapping QTL for dynamic traits, are described. Finally, some future prospects are proposed, including how to mine novel genes in germplasm resources, map expression QTL (eQTL) by the use of all markers, phenotypes and micro-array data, identify QTL using genetic mating designs and detect viability loci. The purpose is to direct plant geneticists to choose a suitable method in the inheritance analysis of quantitative traits and in the search for novel genes in germplasm resources so that more potential genetic information can be uncovered.

  13. Advances in product family and product platform design methods & applications

    CERN Document Server

    Jiao, Jianxin; Siddique, Zahed; Hölttä-Otto, Katja

    2014-01-01

    Advances in Product Family and Product Platform Design: Methods & Applications highlights recent advances that have been made to support product family and product platform design and successful applications in industry. This book provides not only motivation for product family and product platform design—the “why” and “when” of platforming—but also methods and tools to support the design and development of families of products based on shared platforms—the “what”, “how”, and “where” of platforming. It begins with an overview of recent product family design research to introduce readers to the breadth of the topic and progresses to more detailed topics and design theory to help designers, engineers, and project managers plan, architect, and implement platform-based product development strategies in their companies. This book also: Presents state-of-the-art methods and tools for product family and product platform design Adopts an integrated, systems view on product family and pro...

  14. Validity and reliability of a computer method to estimate vertebral axial rotation from digital radiographs

    OpenAIRE

    Pinheiro, Alan P.; Tanure, Michelle C.; Anamaria S. Oliveira

    2009-01-01

    Axial vertebral rotation, an important parameter in the assessment of scoliosis, may be identified on X-ray images. In line with advances in the field of digital radiography, hospitals have been increasingly using this technique. The objective of the present study was to evaluate the reliability of computer-processed rotation measurements obtained from digital radiographs. A software program was therefore developed, which is able to digitally reproduce the methods of Perdriolle and Raimond...

  15. Assessment of computational methods for predicting the effects of missense mutations in human cancers

    OpenAIRE

    Gnad, Florian; Baucom, Albion; Mukhyala, Kiran; Manning, Gerard; Zhang, Zemin

    2013-01-01

    Background Recent advances in sequencing technologies have greatly increased the identification of mutations in cancer genomes. However, it remains a significant challenge to identify cancer-driving mutations, since most observed missense changes are neutral passenger mutations. Various computational methods have been developed to predict the effects of amino acid substitutions on protein function and classify mutations as deleterious or benign. These include approaches that rely on evolution...

  16. Advanced Methods in Black-Hole Perturbation Theory

    CERN Document Server

    Pani, Paolo

    2013-01-01

    Black-hole perturbation theory is a useful tool to investigate issues in astrophysics, high-energy physics, and fundamental problems in gravity. It is often complementary to fully-fledged nonlinear evolutions and instrumental to interpret some results of numerical simulations. Several modern applications require advanced tools to investigate the linear dynamics of generic small perturbations around stationary black holes. Here, we present an overview of these applications and introduce extensions of the standard semianalytical methods to construct and solve the linearized field equations in curved spacetime. Current state-of-the-art techniques are pedagogically explained and exciting open problems are presented.

  17. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    Science.gov (United States)

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computation analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in a high-throughput manner and on a large scale.

  19. Advanced methods for fabrication of PHWR and LMFBR fuels

    International Nuclear Information System (INIS)

    For self-reliance in nuclear power, the Department of Atomic Energy (DAE), India is pursuing two specific reactor systems, namely the pressurised heavy water reactors (PHWR) and the liquid metal cooled fast breeder reactors (LMFBR). The reference fuel for PHWR is zircaloy-4 clad high density (≤ 96 per cent T.D.) natural UO2 pellet-pins. The advanced PHWR fuels are UO2-PuO2 (≤ 2 per cent), ThO2-PuO2 (≤ 4 per cent) and ThO2-U233O2 (≤ 2 per cent). Similarly, low density (≤ 85 per cent T.D.) (UPu)O2 pellets clad in SS 316 or D9 is the reference fuel for the first generation of prototype and commercial LMFBRs all over the world. However, (UPu)C and (UPu)N are considered as advanced fuels for LMFBRs mainly because of their shorter doubling time. The conventional method of fabrication of both high and low density oxide, carbide and nitride fuel pellets starting from UO2, PuO2 and ThO2 powders is 'powder metallurgy (P/M)'. The P/M route has, however, the disadvantage of generation and handling of fine powder particles of the fuel and the associated problem of 'radiotoxic dust hazard'. The present paper summarises the state-of-the-art of advanced methods of fabrication of oxide, carbide and nitride fuels and highlights the author's experience on sol-gel-microsphere-pelletisation (SGMP) route for preparation of these materials. The SGMP process uses sol gel derived, dust-free and free-flowing microspheres of oxides, carbide or nitride for direct pelletisation and sintering. Fuel pellets of both low and high density, excellent microhomogeneity and controlled 'open' or 'closed' porosity could be fabricated via the SGMP route. (author). 5 tables, 14 figs., 15 refs

  20. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given
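    As a minimal illustration of the Monte Carlo side of such a coupled system (a toy absorber-only slab, far simpler than a real spallation-source shield; the function name is ours), transmission can be estimated by sampling exponential free paths and compared against the analytic value exp(−μt):

```python
import math
import random

def slab_transmission(mu, thickness, n=100_000, seed=1):
    """Toy Monte Carlo: fraction of normally incident particles crossing a
    purely absorbing slab with attenuation coefficient mu."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        # Sample a free path from the exponential distribution; the
        # 1 - random() trick keeps the argument of log strictly positive.
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            passed += 1
    return passed / n
```

Real shielding codes add scattering, energy dependence and variance reduction, and for thick shields hand the deep-penetration part to discrete ordinates, which is exactly the coupling the abstract describes.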

  1. Fourier modal method and its applications in computational nanophotonics

    CERN Document Server

    Kim, Hwi

    2012-01-01

    Most available books on computational electrodynamics are focused on FDTD, FEM, or other specific techniques developed in microwave engineering. In contrast, Fourier Modal Method and Its Applications in Computational Nanophotonics is a complete guide to the principles and detailed mathematics of the up-to-date Fourier modal method of optical analysis. It takes readers through the implementation of MATLAB(R) codes for practical modeling of well-known and promising nanophotonic structures. The authors also address the limitations of the Fourier modal method. Features Provides a comprehensive guid

  2. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.;

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...

  3. Solution-adaptive finite element method in computational fracture mechanics

    Science.gov (United States)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  4. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  5. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  6. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for computing the averaging kernels C numerically is presented. These kernels can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  7. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, Robert [University of Southern California, Information Sciences Institute; Ang, James [Sandia National Laboratories; Bergman, Keren [Columbia University; Borkar, Shekhar [Intel; Carlson, William [Institute for Defense Analyses; Carrington, Laura [University of California, San Diego; Chiu, George [IBM; Colwell, Robert [DARPA; Dally, William [NVIDIA; Dongarra, Jack [University of Tennessee; Geist, Al [Oak Ridge National Laboratory; Haring, Rud [IBM; Hittinger, Jeffrey [Lawrence Livermore National Laboratory; Hoisie, Adolfy [Pacific Northwest National Laboratory; Klein, Dean Micron; Kogge, Peter [University of Notre Dame; Lethin, Richard [Reservoir Labs; Sarkar, Vivek [Rice University; Schreiber, Robert [Hewlett Packard; Shalf, John [Lawrence Berkeley National Laboratory; Sterling, Thomas [Indiana University; Stevens, Rick [Argonne National Laboratory; Bashor, Jon [Lawrence Berkeley National Laboratory; Brightwell, Ron [Sandia National Laboratories; Coteus, Paul [IBM; Debenedictus, Erik [Sandia National Laboratories; Hiller, Jon [Science and Technology Associates; Kim, K. H. [IBM; Langston, Harper [Reservoir Labs; Murphy, Richard Micron; Webster, Clayton [Oak Ridge National Laboratory; Wild, Stefan [Argonne National Laboratory; Grider, Gary [Los Alamos National Laboratory; Ross, Rob [Argonne National Laboratory; Leyffer, Sven [Argonne National Laboratory; Laros III, James [Sandia National Laboratories

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  8. Methods and Systems for Advanced Spaceport Information Management

    Science.gov (United States)

    Fussell, Ronald M. (Inventor); Ely, Donald W. (Inventor); Meier, Gary M. (Inventor); Halpin, Paul C. (Inventor); Meade, Phillip T. (Inventor); Jacobson, Craig A. (Inventor); Blackwell-Thompson, Charlie (Inventor)

    2007-01-01

    Advanced spaceport information management methods and systems are disclosed. In one embodiment, a method includes coupling a test system to the payload and transmitting one or more test signals that emulate an anticipated condition from the test system to the payload. One or more responsive signals are received from the payload into the test system and are analyzed to determine whether one or more of the responsive signals comprises an anomalous signal. At least one of the steps of transmitting, receiving, analyzing and determining includes transmitting at least one of the test signals and the responsive signals via a communications link from a payload processing facility to a remotely located facility. In one particular embodiment, the communications link is an Internet link from a payload processing facility to a remotely located facility (e.g. a launch facility, university, etc.).

  9. Continued rise of the cloud advances and trends in cloud computing

    CERN Document Server

    Mahmood, Zaigham

    2014-01-01

    Cloud computing is no longer a novel paradigm, but instead an increasingly robust and established technology, yet new developments continue to emerge in this area. Continued Rise of the Cloud: Advances and Trends in Cloud Computing captures the state of the art in cloud technologies, infrastructures, and service delivery and deployment models. The book provides guidance and case studies on the development of cloud-based services and infrastructures from an international selection of expert researchers and practitioners. A careful analysis is provided of relevant theoretical frameworks, prac

  10. Advances in simulated modeling of vibration systems based on computational intelligence

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computational intelligence is the computational simulation of the bio-intelligence, which includes artificial neural networks, fuzzy systems and evolutionary computations. This article summarizes the state of the art in the field of simulated modeling of vibration systems using methods of computational intelligence, based on some relevant subjects and the authors' own research work. First, contributions to the applications of computational intelligence to the identification of nonlinear characteristics of packaging are reviewed. Subsequently, applications of the newly developed training algorithms for feedforward neural networks to the identification of restoring forces in multi-degree-of-freedom nonlinear systems are discussed. Finally, the neural-network-based method of model reduction for the dynamic simulation of microelectromechanical systems (MEMS) using generalized Hebbian algorithm (GHA) and robust GHA is outlined. The prospects of the simulated modeling of vibration systems using techniques of computational intelligence are also indicated.
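    The generalized Hebbian algorithm (GHA) mentioned for MEMS model reduction is, at its core, Sanger's rule for online principal-component extraction; a minimal sketch (the function name is ours, and this is a generic illustration rather than the authors' reduction scheme):

```python
import numpy as np

def gha_components(X, k=1, lr=0.01, epochs=50, seed=0):
    """Sanger's generalized Hebbian algorithm: online extraction of the
    k leading principal components of zero-mean data X (samples x dims)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((k, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # Hebbian term minus lower-triangular deflation (Sanger's rule).
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

For model reduction the extracted components define a low-dimensional subspace onto which the full vibration dynamics are projected.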

  11. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    Science.gov (United States)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.

  12. Robotics, Stem Cells and Brain Computer Interfaces in Rehabilitation and Recovery from Stroke; Updates and Advances

    Science.gov (United States)

    Boninger, Michael L; Wechsler, Lawrence R.; Stein, Joel

    2014-01-01

    Objective To describe the current state and latest advances in robotics, stem cells, and brain computer interfaces in rehabilitation and recovery from stroke. Design The authors of this summary recently reviewed this work as part of a national presentation; this paper summarizes the information presented in each area. Results Each area has seen great advances and challenges as products move to market and experiments are ongoing. Conclusion Robotics, stem cells, and brain computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial. PMID:25313662

  13. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods with the accuracy of DG methods to obtain a better numerical algorithm in computational fluid dynamics. A strength of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, containing classical finite volume and standard DG methods as two special cases and thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods, as well as a least-squares recovery method, are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The in-cell reconstruction is designed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness. (author)
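    The least-squares step that such reconstructions build on can be sketched in a few lines. The snippet below is an illustration under simplified assumptions (a single cell with scattered neighbors, invented coordinates), not the authors' code: it recovers a cell gradient from surrounding values and is exact for linear fields.

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient at cell center xc (value uc), given
    neighbor centers xn (k x 2 array) and neighbor values un (k,)."""
    d = xn - xc                    # displacement vectors to neighbors
    du = un - uc                   # value differences
    g, *_ = np.linalg.lstsq(d, du, rcond=None)
    return g

# A linear field u = 3x - 2y + 1 should be reconstructed exactly.
xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.2], [-0.5, 1.0], [0.3, -0.8], [-1.0, -0.4]])
f = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1] + 1.0
g = ls_gradient(xc, f(xc), xn, f(xn))
print(g)  # ≈ [3, -2]
```

    The RDG idea applies the same machinery one order higher, fitting a quadratic to the linear in-cell DG solution and its neighbors.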

  14. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    Science.gov (United States)

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  15. The spectral-element method, Beowulf computing, and global seismology.

    Science.gov (United States)

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  16. A Brief Review of Computational Gene Prediction Methods

    Institute of Scientific and Technical Information of China (English)

    Zhuo Wang; Yazhu Chen; Yixue Li

    2004-01-01

    With the development of genome sequencing for many organisms, more and more raw sequences need to be annotated. Gene prediction by computational methods for finding the location of protein coding regions is one of the essential issues in bioinformatics. Two classes of methods are generally adopted: similarity based searches and ab initio prediction. Here, we review the development of gene prediction methods, summarize the measures for evaluating predictor quality, highlight open problems in this area, and discuss future research directions.
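    The quality measures mentioned above are commonly computed at the nucleotide level as sensitivity and specificity (in the sense popularized by Burset and Guigó's benchmark). A minimal sketch, with invented labels:

```python
# Nucleotide-level evaluation of a gene predictor: each position is flagged
# 1 (coding) or 0 (non-coding) in both the annotation and the prediction.

def nucleotide_metrics(actual, predicted):
    tp = sum(a and p for a, p in zip(actual, predicted))        # coding, found
    fn = sum(a and not p for a, p in zip(actual, predicted))    # coding, missed
    fp = sum(p and not a for a, p in zip(actual, predicted))    # false calls
    sn = tp / (tp + fn)  # sensitivity: fraction of coding bases recovered
    sp = tp / (tp + fp)  # specificity: fraction of predicted bases truly coding
    return sn, sp

actual    = [1, 1, 1, 0, 0, 1, 1, 0]
predicted = [1, 1, 0, 0, 1, 1, 1, 0]
print(nucleotide_metrics(actual, predicted))  # (0.8, 0.8)
```

    Note that "specificity" in the gene-prediction literature corresponds to what other fields call precision.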

  17. Computational methods to obtain time optimal jet engine control

    Science.gov (United States)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Dynamic Programming and the Fletcher-Reeves Conjugate Gradient Method are two existing methods which can be applied to solve a general class of unconstrained fixed time, free right end optimal control problems. New techniques are developed to adapt these methods to solve a time optimal control problem with state variable and control constraints. Specifically, they are applied to compute a time optimal control for a jet engine control problem.
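    For reference, the unconstrained Fletcher-Reeves iteration that serves as the starting point can be sketched on a quadratic test problem (the engine problem adds the state and control constraints handled by the paper's new techniques; the matrix and vector below are invented):

```python
import numpy as np

def fletcher_reeves(A, b, x0, iters=50, tol=1e-12):
    """Minimize 0.5 x^T A x - b^T x for symmetric positive definite A."""
    x = x0.astype(float).copy()
    g = A @ x - b              # gradient
    d = -g                     # first direction: steepest descent
    for _ in range(iters):
        alpha = (g @ g) / (d @ A @ d)     # exact line search for a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        if np.linalg.norm(g_new) < tol:   # converged
            break
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = fletcher_reeves(A, b, np.zeros(2))
print(x)  # ≈ [1/11, 7/11], the exact minimizer A^{-1} b
```

    On an n-dimensional quadratic with exact line search the iteration terminates in at most n steps; general problems replace the exact step with a line search.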

  18. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    Science.gov (United States)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution.
In addition, RIACS collaborates with NASA scientists to apply information technology research to

  19. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for developing Manufacturing Grid application systems in a single-personal-computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. First, all the Manufacturing Grid physical resource nodes are built on an abstraction layer of a single personal computer with virtual machine technology. Second, the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each node. A prototype Manufacturing Grid application system running on a single personal computer is thus obtained, and experiments can be carried out on this foundation. Compared with known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, while yielding trustworthy experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  20. Adjacency Matrix based method to compute the node connectivity of a Computer Communication Network

    CERN Document Server

    Kamalesh, V N

    2010-01-01

    Survivability of a computer communication network is the ability of a network to provide continuous service in the presence of link or node failures. It is an essential and considerable concern in the design of high speed communication network topologies. The connectivity number of a network is the graph theoretical metric used to measure survivability of the communication network. Given a network and a positive integer k, a few heuristics exist in the literature to verify whether the given network is k-connected or not. This paper presents a method to compute the connectivity number k of a given computer communication network.
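    For orientation, the quantity being computed can be checked by brute force on small networks (this is a definitional sketch, not the paper's adjacency-matrix method): the connectivity number is the size of the smallest set of nodes whose removal disconnects the remaining network.

```python
from itertools import combinations

def is_connected(nodes, adj):
    """Depth-first search restricted to the given node subset."""
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in nodes and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def node_connectivity(adj):
    nodes = list(adj)
    for k in range(len(nodes) - 1):
        for cut in combinations(nodes, k):    # try every candidate node cut
            rest = [n for n in nodes if n not in cut]
            if len(rest) >= 2 and not is_connected(rest, adj):
                return k
    return len(nodes) - 1                     # complete graph

# 5-node ring: removing any single node still leaves a path, so k = 2
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(node_connectivity(cycle5))  # 2
```

    The exponential cost of this check is exactly why polynomial-time methods such as the one in this paper matter for realistic topologies.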

  1. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
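    The lattice-approximation strategy can be illustrated on one of the "simple quantum mechanical systems" the abstract alludes to. The toy below (grid size and spacing are illustrative choices, not from the thesis) approximates the continuum Hamiltonian of a 1-D harmonic oscillator, H = -0.5 d²/dx² + 0.5 x² with ħ = m = ω = 1, on a finite spatial lattice and diagonalizes it:

```python
import numpy as np

n, L = 400, 20.0                 # lattice points, box length
dx = L / n
x = -L / 2 + dx * np.arange(n)

# kinetic term: second-difference approximation of -0.5 d^2/dx^2
T = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                      + np.diag(np.ones(n - 1), -1)
                      - 2.0 * np.eye(n))
V = np.diag(0.5 * x**2)          # harmonic potential on the lattice
E = np.linalg.eigvalsh(T + V)

print(E[:3])  # ≈ [0.5, 1.5, 2.5]; lattice errors shrink as dx -> 0
```

    As in the thesis, the finite lattice turns an infinite-dimensional eigenvalue problem into a numerically tractable matrix problem, with errors controlled by the spacing and box size.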

  2. Advanced discretizations and multigrid methods for liquid crystal configurations

    Science.gov (United States)

    Emerson, David B.

    Liquid crystals are substances that possess mesophases with properties intermediate between liquids and crystals. Here, we consider nematic liquid crystals, which consist of rod-like molecules whose average pointwise orientation is represented by a unit-length vector, n( x, y, z) = (n1, n 2, n3)T. In addition to their self-structuring properties, nematics are dielectrically active and birefringent. These traits continue to lead to many important applications and discoveries. Numerical simulations of liquid crystal configurations are used to suggest the presence of new physical phenomena, analyze experiments, and optimize devices. This thesis develops a constrained energy-minimization finite-element method for the efficient computation of nematic liquid crystal equilibrium configurations based on a Lagrange multiplier formulation and the Frank-Oseen free-elastic energy model. First-order optimality conditions are derived and linearized via a Newton approach, yielding a linear system of equations. Due to the nonlinear unit-length constraint, novel well-posedness theory for the variational systems, as well as error analysis, is conducted. The approach is shown to constitute a convergent and well-posed approach, absent typical simplifying assumptions. Moreover, the energy-minimization method and well-posedness theory developed for the free-elastic case are extended to include the effects of applied electric fields and flexoelectricity. In the computational algorithm, nested iteration is applied and proves highly effective at reducing computational costs. Additionally, an alternative technique is studied, where the unit-length constraint is imposed by a penalty method. The performance of the penalty and Lagrange multiplier methods is compared. Furthermore, tailored trust-region strategies are introduced to improve robustness and efficiency. While both approaches yield effective algorithms, the Lagrange multiplier method demonstrates superior accuracy per unit cost. 
In
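    The penalty alternative studied in the thesis can be illustrated on a toy problem (the model energy, penalty weight and step size below are invented, not the thesis formulation): the unit-length constraint |n| = 1 on a director is imposed approximately by adding a quartic penalty term and descending the gradient.

```python
import numpy as np

def minimize_penalty(a, mu=50.0, lr=1e-3, iters=20000):
    """Minimize 0.5|n - a|^2 + (mu/2)(|n|^2 - 1)^2 by gradient descent."""
    n = a.copy()
    for _ in range(iters):
        grad = (n - a) + 2.0 * mu * (n @ n - 1.0) * n
        n -= lr * grad
    return n

a = np.array([2.0, 0.0, 0.0])        # unconstrained minimizer, length 2
n = minimize_penalty(a)
print(np.linalg.norm(n))             # near 1: constraint met only approximately

# for this simple energy the Lagrange-multiplier (exactly constrained)
# solution is the projection of a onto the unit sphere
print(a / np.linalg.norm(a))
```

    The sketch shows the trade-off the thesis quantifies: the penalty method satisfies the constraint only up to O(1/mu), whereas the Lagrange multiplier formulation enforces it exactly at the cost of a larger saddle-point system.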

  3. Practical Use of Computationally Frugal Model Analysis Methods.

    Science.gov (United States)

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring thousands, tens of thousands, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts. PMID:25810333
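    One representative frugal diagnostic from this family is the local, scaled sensitivity computed from n+1 model runs: one base run plus one perturbed run per parameter. The sketch below is illustrative (the toy "model" is invented, not from the paper):

```python
def scaled_sensitivities(model, params, rel_step=0.01):
    """Dimensionless local sensitivities d ln(output) / d ln(parameter),
    from one base run plus one perturbed run per parameter."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1.0 + rel_step)})
        deriv = (model(perturbed) - base) / (value * rel_step)
        sens[name] = deriv * value / base
    return sens

# toy drawdown model: output scales as Q^1 T^-1 (pumping rate / transmissivity)
model = lambda p: p["Q"] / p["T"]
s = scaled_sensitivities(model, {"Q": 100.0, "T": 25.0})
print(s)  # Q sensitivity ≈ +1, T sensitivity ≈ -1
```

    The n+1 perturbed runs are independent, so they parallelize trivially, which is what keeps this class of methods "frugal" even for expensive models.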

  4. A computational study of advanced exhaust system transition ducts with experimental validation

    Science.gov (United States)

    Wu, C.; Farokhi, S.; Taghavi, R.

    1992-01-01

    The current study is an application of CFD to a 'real' design and analysis environment. A subsonic, three-dimensional parabolized Navier-Stokes (PNS) code is used to construct stall margin design charts for optimum-length advanced exhaust systems' circular-to-rectangular transition ducts. Computer code validation has been conducted to examine the capability of wall static pressure predictions. The comparison of measured and computed wall static pressures indicates a reasonable accuracy of the PNS computer code results. Computations have also been conducted on 15 transition ducts, three area ratios, and five aspect ratios. The three area ratios investigated are constant area ratio of unity, moderate contracting area ratio of 0.8, and highly contracting area ratio of 0.5. The degree of mean flow acceleration is identified as a dominant parameter in establishing the minimum duct length requirement. The effect of increasing aspect ratio in the minimum length transition duct is to increase the length requirement, as well as to increase the mass-averaged total pressure losses. The design guidelines constructed from this investigation may aid in the design and manufacture of advanced exhaust systems for modern fighter aircraft.

  5. Recent Advances in Computational Simulation of Macro-, Meso-, and Micro-Scale Biomimetics Related Fluid Flow Problems

    Institute of Scientific and Technical Information of China (English)

    Y. Y. Yan

    2007-01-01

    Over the last decade, computational methods have been intensively applied to a variety of scientific researches and engineering designs. Although the computational fluid dynamics (CFD) method has played a dominant role in studying and simulating transport phenomena involving fluid flow and heat and mass transfers, in recent years, other numerical methods for simulations at meso- and micro-scales have also been actively applied to solve the physics of complex flow and fluid-interface interactions. This paper presents a review of recent advances in multi-scale computational simulation of biomimetics related fluid flow problems. The state-of-the-art numerical techniques, such as the lattice Boltzmann method (LBM), molecular dynamics (MD), and conventional CFD, are introduced, along with their application to problems such as flow around fish, the electro-osmosis effect in earthworm motion, and self-cleaning hydrophobic surfaces. New challenges in modelling biomimetic problems, such as developing the physical conditions of self-cleaning hydrophobic surfaces, are discussed.

  6. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    Full Text Available A key direction of modern computer network reengineering is the transfer to the new energy-saving technology IEEE 802.3az. Making a reasoned decision about transition to the new technology requires a technique that allows network engineers to answer the question of whether a network upgrade is economically feasible. Aim: The aim of this research is the development of a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models for calculating the power consumption of a computer network port operating in the IEEE 802.3 standard and in the energy-efficient mode of the IEEE 802.3az standard. Frame transmission time in the communication channel is calculated with a queuing model. To determine the values of the network operation parameters, a multiagent network monitoring method is proposed. Results: The method allows calculating the economic impact of transferring a computer network to the energy-saving technology IEEE 802.3az. To determine the network performance parameters, network SNMP monitoring systems based on RMON MIB agents are proposed.
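    The shape of the cost question can be sketched as a back-of-the-envelope calculation. Every number below is a hypothetical input, not data from the paper: per-port power for legacy IEEE 802.3 versus IEEE 802.3az (which idles at low power), electricity price, port count, and upgrade cost.

```python
def annual_port_energy_kwh(active_w, idle_w, utilization):
    """Mean port power weighted by link utilization, over one year (8760 h)."""
    mean_w = active_w * utilization + idle_w * (1.0 - utilization)
    return mean_w * 8760.0 / 1000.0

# hypothetical per-port figures: legacy ports draw full power even when idle
legacy = annual_port_energy_kwh(active_w=0.5, idle_w=0.5, utilization=0.1)
eee    = annual_port_energy_kwh(active_w=0.5, idle_w=0.1, utilization=0.1)

ports, price_per_kwh, upgrade_cost = 240, 0.15, 2000.0
saving = (legacy - eee) * ports * price_per_kwh   # currency units per year
print(saving, upgrade_cost / saving)              # annual saving, payback years
```

    The paper's contribution is to replace the placeholder utilization and power figures above with values derived from queuing models and live SNMP/RMON monitoring.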

  7. Proposed congestion control method for cloud computing environments

    Directory of Open Access Journals (Sweden)

    Shin-ichi Kuribayashi

    2011-10-01

    Full Text Available As cloud computing services rapidly expand their customer base, it has become important to share cloud resources so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, competition arises between these requests for the use of cloud resources, disrupting the service; it is therefore necessary to consider measures to avoid or relieve congestion in cloud computing environments. This paper proposes a new congestion control method for cloud computing environments which reduces the size of the required resource for the congested resource type, instead of restricting all service requests as in existing networks. Next, this paper proposes user service specifications for the proposed congestion control method, and clarifies the algorithm for deciding the optimal size of the required resource to be reduced, based on the load offered to the system. Simulation evaluations demonstrate that the proposed method can handle more requests than conventional methods and relieve congestion. Finally, this paper proposes an enhancement of the method that enables fair resource allocation among users in congested situations.
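    The core idea can be sketched with a toy admission loop (all request sizes, the capacity, and the reduction factor below are invented): when the congested resource cannot cover a full request, the request is admitted with a reduced allocation instead of being rejected outright.

```python
def accepted_requests(requests, capacity, reduce_factor=None):
    """Count admitted requests; optionally admit at a reduced size
    when the full request no longer fits."""
    free, accepted = capacity, 0
    for size in requests:
        if size <= free:
            free -= size                  # admit at full size
            accepted += 1
        elif reduce_factor is not None and reduce_factor * size <= free:
            free -= reduce_factor * size  # admit with a reduced allocation
            accepted += 1
    return accepted

requests = [6.0, 5.0, 4.0, 3.0]
strict  = accepted_requests(requests, capacity=10.0)
reduced = accepted_requests(requests, capacity=10.0, reduce_factor=0.5)
print(strict, reduced)  # 2 3: the reduced-size policy admits one more request
```

    The paper's method goes further by choosing the reduction size optimally from the offered load rather than using a fixed factor as here.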

  8. Advanced communication methods developed for nuclear data communication applications

    International Nuclear Information System (INIS)

    We conducted various experiments and tested data communication methods that may be useful for applications in the nuclear industry. We explored the following areas: 1. scientific data communication among scientists within the laboratory and inter-laboratory data exchange; 2. data from remote and wired sensors; 3. data from multiple sensors within a small zone; 4. data from single or multiple sensors at distances above 100 m and less than 10 km. No single data communication method was found to be the best solution for nuclear applications, and multiple modes of communication were found to be more advantageous than any single mode of data communication. A network of computers in the control room and between laboratories, connected with optical fiber or an isolated Ethernet coaxial LAN, was found to be optimum. Collecting information from multiple analog process sensors in smaller zones, such as the reactor building and laboratories, over an I2C LAN or a short-range wireless LAN was found to be advantageous. Within the laboratory, an I2C sensor data network was found to be cost effective, while a wireless LAN was comparatively expensive. Within a room, an infrared optical LAN and an FSK wireless LAN were found to be highly useful in freeing the sensors from wires. Direct sensor interfaces over FSK wireless links were found to be fast, accurate and cost effective for long-distance data communication; such links are the only way to communicate with sea buoys and balloon-borne hardware. A 1-Wire communication network (Dallas Semiconductor, USA) was used for weather station data communication. Computer-to-computer communication using optical LAN links has been tried; temperature, pressure, humidity, ionizing radiation, generator RPM and voltage, and various other analog signals were also transported over FSK optical and wireless links. Multiple sensors needed a dedicated data acquisition system and a wireless LAN for data telemetry. (author)

  9. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. nuclear gauge densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve these objectives, the work will use optical probes and gamma ray computed tomography (CT) (for measurements of the cross-sectional distribution and radial profiles of solids/voidage holdup along the bed height, the spout diameter, and the fountain height) and radioactive particle tracking (RPT) (for measurements of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate computational fluid dynamic (CFD) models (two-fluid or discrete particle models) and their closures. The validated CFD models and closures will be used to support the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance.
Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains
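    The gamma-ray densitometry mentioned above rests on Beer-Lambert attenuation, which makes the logarithm of the transmitted intensity approximately linear in the solids holdup along the beam. A hedged sketch of the inference step, with all intensities invented:

```python
import math

def solids_holdup(I, I_empty, I_packed):
    """Holdup from a measured beam intensity I, interpolating the
    log-attenuation between the empty-bed (holdup 0) and packed-bed
    (holdup 1) calibration limits."""
    return math.log(I_empty / I) / math.log(I_empty / I_packed)

# two calibration scans plus one measurement through the spouted region
h = solids_holdup(I=700.0, I_empty=1000.0, I_packed=400.0)
print(h)  # ≈ 0.39
```

    In practice such point measurements are taken along many chords and heights, which is how the holdup profiles used as CFD benchmark data are assembled.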

  10. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, and automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing and scientific collaboration stimulated further thinking on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  11. Numerical modeling of spray combustion with an advanced VOF method

    Science.gov (United States)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  12. NATO Advanced Research Workshop on Exploiting Mental Imagery with Computers in Mathematics Education

    CERN Document Server

    Mason, John

    1995-01-01

    The advent of fast and sophisticated computer graphics has brought dynamic and interactive images under the control of professional mathematicians and mathematics teachers. This volume in the NATO Special Programme on Advanced Educational Technology takes a comprehensive and critical look at how the computer can support the use of visual images in mathematical problem solving. The contributions are written by researchers and teachers from a variety of disciplines including computer science, mathematics, mathematics education, psychology, and design. Some focus on the use of external visual images and others on the development of individual mental imagery. The book is the first collected volume in a research area that is developing rapidly, and the authors pose some challenging new questions.

  13. A comparative study of computational methods in cosmic gas dynamics

    Science.gov (United States)

    Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.

    1982-01-01

    Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation aims to provide an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux-corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.
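    The first-order versus second-order gap reported here can be reproduced in miniature (the schemes and test below are illustrative stand-ins, not the paper's setup): advecting a smooth profile once around a periodic domain, a first-order upwind scheme smears it badly while a second-order scheme (Lax-Wendroff here) stays far closer.

```python
import numpy as np

n, cfl = 200, 0.5
x = np.arange(n) / n
u0 = np.exp(-100.0 * (x - 0.5) ** 2)     # smooth initial profile, speed = 1

def advect(u, steps, second_order):
    for _ in range(steps):
        um, up = np.roll(u, 1), np.roll(u, -1)   # periodic neighbors
        if second_order:  # Lax-Wendroff
            u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)
        else:             # first-order upwind
            u = u - cfl * (u - um)
    return u

steps = int(n / cfl)                     # exactly one period around the domain
err_uw = np.abs(advect(u0, steps, False) - u0).mean()
err_lw = np.abs(advect(u0, steps, True) - u0).mean()
print(err_uw, err_lw)                    # the second-order error is far smaller
```

    For discontinuous profiles the second-order scheme would also oscillate, which is the failure mode flux-corrected transport and flux splitting were designed to tame.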

  14. Computer Hardware, Advanced Mathematics and Model Physics pilot project final report

    International Nuclear Information System (INIS)

    The Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) Program was launched in January, 1990. A principal objective of the program has been to utilize the emerging capabilities of massively parallel scientific computers in the challenge of regional scale predictions of decade-to-century climate change. CHAMMP has already demonstrated the feasibility of achieving a 10,000 fold increase in computational throughput for climate modeling in this decade. What we have also recognized, however, is the need for new algorithms and computer software to capitalize on the radically new computing architectures. This report describes the pilot CHAMMP projects at the DOE National Laboratories and the National Center for Atmospheric Research (NCAR). The pilot projects were selected to identify the principal challenges to CHAMMP and to entrain new scientific computing expertise. The success of some of these projects has aided in the definition of the CHAMMP scientific plan. Many of the papers in this report have been or will be submitted for publication in the open literature. Readers are urged to consult with the authors directly for questions or comments about their papers

  15. Viscous-Inviscid Coupling Methods for Advanced Marine Propeller Applications

    Directory of Open Access Journals (Sweden)

    Martin Greve

    2012-01-01

    Full Text Available The paper reports the development of coupling strategies between an inviscid direct panel method and a viscous RANS method and their application to complex propeller flows. The work is motivated by the prohibitive computational cost associated with unsteady viscous flow simulations using geometrically resolved propellers to analyse the dynamics of ships in seaways. The present effort aims to combine the advantages of the two baseline methods in order to reduce the numerical effort without compromising the predictive accuracy. Accordingly, the viscous method is used to calculate the global flow field, while the inviscid method predicts the forces acting on the propeller. The corresponding reaction forces are employed as body forces to mimic the propeller influence on the viscous flow field. Examples included refer to simple verification cases for an isolated propeller blade, open-water validation simulations for a complete propeller, and more challenging investigations of a manoeuvring vessel in seaways. Reported results reveal a fair predictive agreement between the coupled approach and fully viscous simulations and display the efficiency of the coupled approach.
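
    The coupling strategy described above can be sketched as a fixed-point iteration between the two solvers: the viscous field supplies the propeller inflow, the inviscid method returns the forces. The one-line "solvers" below are hypothetical toy models standing in for the panel method and the RANS code; only the structure of the under-relaxed loop reflects the paper's approach.

```python
def inviscid_propeller_thrust(u_inflow):
    """Stand-in for the panel method: thrust falls as inflow speeds up.
    (Hypothetical linear model, not a real propeller curve.)"""
    return max(0.0, 100.0 * (1.0 - u_inflow / 10.0))

def viscous_inflow(thrust):
    """Stand-in for the RANS solver: the thrust, applied as body forces,
    accelerates the flow sampled at the propeller plane."""
    return 5.0 + 0.02 * thrust

# Under-relaxed fixed-point coupling between the two models.
u, relax = 5.0, 0.5
for it in range(100):
    thrust = inviscid_propeller_thrust(u)
    u_new = viscous_inflow(thrust)
    if abs(u_new - u) < 1e-10:
        break
    u = u + relax * (u_new - u)
print(f"converged after {it} iterations: inflow={u:.3f}, thrust={thrust:.2f}")
```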

  16. Computational Methods for Dynamic Stability and Control Derivatives

    Science.gov (United States)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  17. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  18. Advanced methods for the study of PWR cores

    International Nuclear Information System (INIS)

    This document gathers the transparencies presented at the 6th technical session of the French nuclear energy society (SFEN) in October 2003. The transparencies of the annual meeting are presented in the introductive part: 1 - status of the French nuclear fleet: nuclear energy results, management of an exceptional climatic situation: the heat wave of summer 2003 and the power generation (J.C. Barral); 2 - status of the research on controlled thermonuclear fusion (J. Johner). Then follows the technical session about the advanced methods for the study of PWR reactor cores: 1 - the evolution approach of study methodologies (M. Lambert, J. Pelet); 2 - the point of view of the nuclear safety authority (D. Brenot); 3 - the improved decoupled methodology for the steam pipe rupture (S. Salvatores, J.Y. Pouliquen); 4 - the MIR method for the pellet-clad interaction (renovated IPG methodology) (E. Baud, C. Royere); 5 - the improved fuel management (IFM) studies for Koeberg (C. Cohen); 6 - principle of the methods of accident study implemented for the European pressurized reactor (EPR) (F. Foret, A. Ferrier); 7 - accident studies with the EPR, steam pipe rupture (N. Nicaise, S. Salvatores); 8 - the co-development platform, a new generation of software tools for the new methodologies (C. Chauliac). (J.S.)

  19. POINCARE-LIGHTHILL-KUO METHOD AND SYMBOLIC COMPUTATION

    Institute of Scientific and Technical Information of China (English)

    戴世强

    2001-01-01

    This paper elucidates the effectiveness of combining the Poincare-Lighthill-Kuo method (PLK method for short) and symbolic computation. Firstly, the idea and history of the PLK method are briefly introduced. Then, the difficulty of intermediate expression swell, often encountered in symbolic computation, is outlined. To overcome this difficulty, a semi-inverse algorithm was proposed by the author, in which the lengthy parts of intermediate expressions are frozen in the form of symbols until the final stage of seeking perturbation solutions. To discuss the applications of the above algorithm, the related work of the author and his research group on nonlinear oscillations and waves is concisely reviewed. The computer-extended perturbation solution of the Duffing equation shows that the asymptotic solution obtained with the PLK method possesses a convergence radius of 1, and thus the range of validity of the solution is considerably enlarged. The studies on internal solitary waves in stratified fluid and on the head-on collision between two solitary waves in a hyperelastic rod indicate that, by means of the presented methods, very complicated manipulations, inconceivable by hand calculation, can be conducted, resulting in higher-order evolution equations and asymptotic solutions. The examples illustrate that the algorithm helps to realize symbolic computation on micro-computers. Finally, it is concluded that with the aid of symbolic computation, the vitality of the PLK method is greatly strengthened and, at least for solutions to conservative systems of oscillations and waves, it is a powerful tool.

  20. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.

  1. Micro-computed tomography: an alternative method for shark ageing.

    Science.gov (United States)

    Geraghty, P T; Jones, A S; Stewart, J; Macbeth, W G

    2012-04-01

    Micro-computed tomography (microCT) produced 3D reconstructions of shark Carcharhinus brevipinna vertebrae that could be virtually sectioned along any desired plane, and upon which growth bands were readily visible. When compared to manual sectioning, it proved to be a valid and repeatable means of ageing and offers several distinct advantages over other ageing methods. PMID:22497384

  2. Comparison of advanced iterative reconstruction methods for SPECT/CT

    International Nuclear Information System (INIS)

    Aim: Corrective image reconstruction methods, which produce reconstructed images with improved spatial resolution and decreased noise levels, have recently become commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings. Methods: A specially designed resolution phantom containing three 99mTc line sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electric Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a 99mTc-water solution. The projection data were reconstructed using GE's Evolution for Bone, Philips' Astonish and Siemens' Flash3D software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed the recommendations of each vendor. The results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme. Results: The best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5 mm (GE), from 9.1 to 6.4 mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves from 147 to 189 for GE, from 179 to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but significantly deteriorates the
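
    The OSEM scheme used as the reference reconstruction updates the image multiplicatively from the ratio of measured to reprojected counts. Below is a minimal MLEM sketch (OSEM with a single subset) on a toy 3-pixel system; the system matrix and activity values are invented for illustration, not taken from the study.

```python
def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).
    A[i][j] is the probability that pixel j contributes to detector bin i."""
    n_bins, n_pix = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_pix)]
    x = [1.0] * n_pix
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_bins)]
        ratio = [y[i] / proj[i] for i in range(n_bins)]
        backp = [sum(A[i][j] * ratio[i] for i in range(n_bins)) for j in range(n_pix)]
        x = [x[j] * backp[j] / sens[j] for j in range(n_pix)]
    return x

# Tiny noiseless example: with exact data, MLEM recovers the true activity.
A = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
x_true = [10.0, 2.0, 5.0]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]
x = mlem(A, y)
print([round(v, 2) for v in x])  # approaches [10.0, 2.0, 5.0]
```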

  3. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
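
    The decision step of the method (reconfigure fault tolerance based on a measured environmental condition and the processing system's sensitivity to it) might be sketched as follows. The thresholds, mode names, and upset-rate model are illustrative assumptions, not taken from the patent.

```python
def choose_fault_tolerance(radiation_level, sensitivity):
    """Pick a redundancy mode from the measured environment and the
    processor's measured sensitivity. All numbers here are hypothetical."""
    expected_upset_rate = radiation_level * sensitivity  # upsets per hour
    if expected_upset_rate < 0.01:
        return "simplex"   # no redundancy needed
    elif expected_upset_rate < 1.0:
        return "duplex"    # detect errors by comparison, retry on mismatch
    else:
        return "TMR"       # triple modular redundancy with voting

# Reconfigure as the environment changes, e.g. crossing a radiation belt.
for flux in (0.001, 0.1, 50.0):
    print(flux, "->", choose_fault_tolerance(flux, sensitivity=0.2))
```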

  4. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
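
    The OLS approach the book presents first can be sketched for simple linear regression; this is a generic Python illustration of the same computation (the book itself works in SAS), with made-up data.

```python
def ols_fit(x, y):
    """Ordinary least squares for y = b0 + b1*x via the closed-form
    normal equations for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx          # slope
    b0 = my - b1 * mx       # intercept
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x
b0, b1 = ols_fit(x, y)
print(f"intercept={b0:.3f}, slope={b1:.3f}")
```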

  5. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c
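
    Monte Carlo methods, one of the nontraditional topics listed, can be illustrated with the classic estimate of pi; this is a generic example (the book's own exercises use MATLAB).

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi as 4 times the fraction of random points in the unit
    square that fall inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

est = monte_carlo_pi(100_000)
print(est)  # close to 3.14159; the error shrinks like 1/sqrt(n)
```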

  6. Computer-aided engineering methods for successful VHSIC application

    Science.gov (United States)

    Wood, R. Gary

    Through the example of a VHSIC implementation of a MIL-STD-1750A avionic processor subsystem, an effective approach is demonstrated that applies CAE methods tailored to the job of VHSIC integration. Structured hierarchical design organization combined with rigorous mixed-mode digital simulation permitted an entire VHSIC-based subsystem and its integral application-specific IC design to be verified with a high degree of confidence. Accurate performance data were obtained well in advance of fabrication.

  7. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  8. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  9. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  10. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  11. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  12. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  13. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  14. Method of public support evaluation for advanced NPP deployment

    International Nuclear Information System (INIS)

    Public support of nuclear power could be fully recovered only if the public, from the very beginning of the new power source selection process, received transparent information and was made a part of an interactive dialogue. The presented method was developed with the objective of facilitating the complex process of utility-public interaction. Our method of public support evaluation makes it possible to classify designs of new nuclear power plants taking into consideration the public attitude to continued nuclear power deployment in the Czech Republic as well as the preference for a certain plant design. The method is based on a model with a set of probabilistic input metrics, which permits comparison of the offered concepts with the reference one with a high degree of objectivity. This method is part of a more complex evaluation procedure applicable to the assessment of new designs that uses the computer code 'Potencial' developed at the NRI Rez plc. The metrics of the established public support criteria are discussed. (author)

  15. Computational methods for physics and mathematics with Fortran and C++ programmes

    CERN Document Server

    Singh, N

    2016-01-01

    COMPUTATIONAL METHODS FOR PHYSICS is designed as a simple text for graduate students in physics and mathematics, MCA, M.Tech., B.Tech and B.Sc. computer science students. Each chapter illustrates mathematical concepts, knowledge of which is essential in developing a complete understanding of numerical analysis. Two chapters related to physics problems have been introduced in the book: Chapter 11, Simulation techniques related to physics (simple problems), and Chapter 12, Simulation techniques related to physics (advanced problems). Topics covered include roots of equations, interpolation, solution of simultaneous equations, eigenvalues and eigenvectors of symmetric matrices, solution of first-order ordinary differential equations, solution of second-order ordinary differential equations, and solution of partial differential equations. It provides numerical methods needed by physicists and mathematicians for doing research in their respective fields. It will also be useful to researchers in e...
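
    As an example of the "roots of equations" material such a text covers, here is a standard bisection sketch (a generic illustration, not code from the book):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection method for a root of f in [a, b]:
    requires f(a) and f(b) to have opposite signs, halves the
    bracketing interval until it is shorter than tol."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Root of x^2 - 2 on [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)  # ~1.41421356
```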

  16. Computed Tomography Venography diagnosis of iliocaval venous obstruction in advanced chronic venous insufficiency

    Directory of Open Access Journals (Sweden)

    Fabio Henrique Rossi

    2014-12-01

    Full Text Available Objective: Iliocaval obstruction is associated with venous hypertension symptoms and may predispose to deep venous thrombosis (DVT). Ultrasonography may fail to achieve noninvasive diagnosis of these obstructions. The possibility of using Computed Tomography Venography (CTV) for these diagnoses is under investigation. Methods: Patients with CVI graded at CEAP clinical classes 3 to 6 and previous treatment failure underwent evaluation with CTV. Percentage obstruction was rated by two independent examiners. Obstruction prevalence and its associations with risk factors and CEAP classification were analyzed. Results: A total of 112 limbs were prospectively evaluated. Mean patient age was 55.8 years and 75.4% of patients were women. Obstructions involved the left lower limb in 71.8% of cases and 35.8% of patients reported a medical history of deep venous thrombosis. Overall, 57.1% of imaging studies demonstrated venous obstruction of at least 50% and 10.7% showed obstruction of >80%. The only risk factor found to be independently associated with a significantly higher incidence of >50% venous obstruction was a medical history of DVT (p=0.035, Fisher's exact test). There was a positive relationship between clinical classification (CEAP) and degree of venous obstruction in the limbs studied (chi-square test for linear trend; p=0.011). Conclusion: Patients with advanced CVI are often affected by obstructions in the iliocaval venous territory, and CTV is able to diagnose the degree of obstruction. There is a positive association between degree of obstruction and both previous history of DVT and severity of symptoms of CVI.
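
    Fisher's exact test, used for the DVT association above, can be computed directly from the hypergeometric distribution. The sketch below implements the one-sided test on a toy table, not the study's data.

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    probability of a count of at least `a` in the top-left cell,
    given fixed row and column margins."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if col1 - k <= row2:  # all four cells must stay non-negative
            p += comb(row1, k) * comb(row2, col1 - k) / denom
    return p

# Perfectly separated toy table: p = C(3,3)*C(3,0)/C(6,3) = 1/20
print(fisher_exact_greater(3, 0, 0, 3))  # 0.05
```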

  17. The effects of advance organizers according to learning styles in computer-assisted instruction software on academic achievement

    Directory of Open Access Journals (Sweden)

    Buket Demir, Ertuğrul Usta

    2011-09-01

    Full Text Available This study aims to investigate the effects of advance organizers in computer-assisted instruction software on the academic achievement of students who have different learning styles. A semi-empirical design with pretest–posttest and a control group was used. The research sample was composed of 131 students taking the Information Technology Course in Süleyman Türkmani Primary School, located in Kırşehir, in the 2010–2011 academic year. Research data were collected using Kolb's Learning Style Inventory and an Academic Achievement Test (KR-20: 0.82). One-way ANOVA and independent-samples t-tests were conducted on the data, and the following results emerged: the presence of advance organizers in instructional software affected the academic achievement of students. There was also a difference between the academic achievement of field-independent learners who studied in computer-assisted environments with and without advance organizers.

  19. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    International Nuclear Information System (INIS)

    High-performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite-difference time-domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, so it is difficult to apply the method to problems posed on complex domains. MTDM, on the other hand, is easily adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance. To reduce the computation time, the communication between decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that computation of MTDM on 128 GPUs is 173 times faster than on a single CPU.
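    For contrast with the meshless approach described in this record, the rectangular-mesh update that makes FDTD hard to apply on complex domains can be illustrated with a minimal one-dimensional sketch. This is our own toy example in normalized units, not code from the record; the PEC boundaries and Courant number are illustrative assumptions.

```python
import math

# Minimal 1D FDTD sketch: Yee-style leapfrog update of the electric (ez)
# and magnetic (hy) fields on a regular grid, normalized units,
# Courant number 0.5 (stable in 1D), perfect-conductor ends.
def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    ez = [0.0] * n_cells
    hy = [0.0] * (n_cells - 1)
    # Initial condition: a Gaussian pulse centered in the grid.
    for i in range(n_cells):
        ez[i] = math.exp(-((i - n_cells // 2) / 10.0) ** 2)
    for _ in range(n_steps):
        for i in range(n_cells - 1):        # update H from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n_cells - 1):     # update E from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
    return ez

field = fdtd_1d()
```

    The pulse splits into two half-amplitude waves that travel outward and reflect off the fixed boundaries; every cell must lie on the regular grid, which is exactly the restriction a meshless method removes.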

  20. Using Computer-Assisted Argumentation Mapping to develop effective argumentation skills in high school advanced placement physics

    Science.gov (United States)

    Heglund, Brian

    Educators recognize the importance of reasoning ability for the development of critical thinking skills, conceptual change, metacognition, and participation in 21st-century society. There is a recognized need for students to improve their skills of argumentation; however, argumentation is not explicitly taught outside logic and philosophy, subjects that are not part of the K-12 curriculum. One potential way of supporting the development of argumentation skills in the K-12 context is through incorporating Computer-Assisted Argument Mapping to evaluate arguments. This quasi-experimental study tested the effects of such argument mapping software and was informed by the following two research questions: 1. To what extent does the collaborative use of Computer-Assisted Argumentation Mapping to evaluate competing theories influence the critical thinking skill of argument evaluation, metacognitive awareness, and conceptual knowledge acquisition in high school Advanced Placement physics, compared to the more traditional method of text tables that does not employ Computer-Assisted Argumentation Mapping? 2. What are the student perceptions of the pros and cons of argument evaluation in the high school Advanced Placement physics environment? This study examined changes in critical thinking skills, including argument evaluation skills, as well as metacognitive awareness and conceptual knowledge, in two groups: a treatment group using Computer-Assisted Argumentation Mapping to evaluate physics arguments, and a comparison group using text tables to evaluate physics arguments. Quantitative and qualitative methods were used to collect and analyze data to answer the research questions. Quantitative data indicated no significant difference between the experimental groups, and qualitative data suggested students perceived pros and cons of argument evaluation in the high school Advanced Placement physics environment, such as a self-reported sense of improvement in argument

  1. Direction and Integration of Experimental Ground Test Capabilities and Computational Methods

    Science.gov (United States)

    Dunn, Steven C.

    2016-01-01

    This paper groups and summarizes the salient points and findings from two AIAA conference panels targeted at defining the direction, with associated key issues and recommendations, for the integration of experimental ground testing and computational methods. Each panel session utilized rapporteurs to capture comments from both the panel members and the audience. Additionally, a virtual panel of several experts was consulted between the two sessions and their comments were also captured. The information is organized into three time-based groupings, as well as by subject area. These panel sessions were designed to provide guidance to both researchers/developers and experimental/computational service providers in defining the future of ground testing, which will be inextricably integrated with the advancement of computational tools.

  2. Computational methods and software systems for dynamics and control of large space structures

    Science.gov (United States)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structures. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.

  3. Computing the Casimir energy using the point-matching method

    CERN Document Server

    Lombardo, F C; Váquez, M; Villar, P I

    2009-01-01

    We use a point-matching approach to numerically compute the Casimir interaction energy for a waveguide of arbitrary section formed by two perfect conductors. We present the method and describe the procedure used to obtain the numerical results. First, our technique is tested on geometries with known solutions, such as concentric and eccentric cylinders. Then, we apply the point-matching technique to compute the Casimir interaction energy for new geometries such as concentric corrugated cylinders and cylinders inside conductors with focal lines.

  4. Compilation of the computing methods in radiation shielding

    International Nuclear Information System (INIS)

    In order to update the KAERI radiation shielding technology, calculational shielding methods were surveyed thoroughly. Computer codes and data libraries for radiation shielding calculation were collected and some model calculations were carried out with them. So far the following materials have been secured for our future use: 23 shielding codes, 7 data libraries, 7 data processing codes and 11 peripheral shielding codes. All of these were compiled again for the CYBER-73 computer system, and will be widely used in shielding analysis of accelerators and shipping casks as well as nuclear power plants. (author)

  5. Computational Methods for Structural Mechanics and Dynamics, part 1

    Science.gov (United States)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  6. An effective method for computing the noise in biochemical networks

    Science.gov (United States)

    Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

    2013-02-01

    We present a simple yet effective method, based on power series expansion, for computing exact binomial moments that can in turn be used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, exact formulae for computing the noise intensities in the species of interest or the steady-state distributions are given analytically. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect, and that the multi-OFF (or multi-ON) mechanism always attenuates the noise, in contrast to the common ON-OFF mechanism, and can modulate the noise to the lowest level independently of the mRNA mean. Apart from its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost.
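    The kind of noise quantity this record discusses can be made concrete for the standard ON-OFF (telegraph) gene model, whose first two moments close exactly. The sketch below is not the paper's power-series method; it simply integrates the exact moment equations to steady state, with rate names (k_on, k_off, k_tx, d) that are our own illustrative choices.

```python
# Steady-state mean and Fano factor (variance/mean) of mRNA in the
# ON-OFF gene model, via forward-Euler integration of the closed
# moment equations. p = P(ON), m = <m>, u = <m * 1_ON>, m2 = <m^2>.
def telegraph_moments(k_on, k_off, k_tx, d, dt=1e-3, t_end=200.0):
    p = m = u = m2 = 0.0
    for _ in range(int(t_end / dt)):
        dp = k_on * (1 - p) - k_off * p
        dm = k_tx * p - d * m
        du = k_on * (m - u) - (k_off + d) * u + k_tx * p
        dm2 = k_tx * (2 * u + p) + d * (m - 2 * m2)
        p += dt * dp
        m += dt * dm
        u += dt * du
        m2 += dt * dm2
    var = m2 - m * m
    return m, var / m  # mean, Fano factor

mean, fano = telegraph_moments(k_on=0.5, k_off=1.0, k_tx=30.0, d=1.0)
```

    For these rates the fixed point gives mean 10 and Fano factor 9, i.e. strongly super-Poissonian noise from the slow promoter switching; a Poissonian (constitutive) gene would give a Fano factor of 1.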

  7. An expanded framework for the advanced computational testing and simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and is aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability and further development of the tools; and improving the functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  8. Advances in Modal Analysis Using a Robust and Multiscale Method

    Directory of Open Access Journals (Sweden)

    Frisson Christian

    2010-01-01

    Full Text Available This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  9. A granular computing method for nonlinear convection-diffusion equation

    Directory of Open Access Journals (Sweden)

    Tian Ya Lan

    2016-01-01

    Full Text Available This paper introduces a method of solving the nonlinear convection-diffusion equation (NCDE), based on the combination of granular computing (GrC) and the characteristics finite element method (CFEM). The key idea of the proposed method (denoted GrC-CFEM) is to reconstruct the solution from the coarse-grained layer to the fine-grained layer. It first obtains the nonlinear solution on the coarse-grained layer, and then a Taylor expansion is applied to linearize the NCDE on the fine-grained layer. Switching to the fine-grained layer, the linear solution is directly derived from the nonlinear solution. The full nonlinear problem is solved only on the coarse-grained layer. Numerical experiments show that GrC-CFEM can accelerate convergence and improve computational efficiency without sacrificing accuracy.

  10. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin's theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation-of-distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm in multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments.
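    The genetic algorithm, one of the methods this record surveys, can be sketched in a few lines when used in its classical role as a function optimizer. This is a generic illustration (tournament selection, blend crossover, Gaussian mutation, elitism), not code from the article, and the test function and parameters are our own assumptions.

```python
import random

# Sphere function: a standard smooth test problem, minimum 0 at the origin.
def sphere(x):
    return sum(v * v for v in x)

def genetic_minimize(f, dim=5, pop_size=40, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=f)[:2]          # elitism: keep the best two
        children = list(elite)
        while len(children) < pop_size:
            # Tournament selection: each parent is the best of 3 random picks.
            a = min(rng.sample(pop, 3), key=f)
            b = min(rng.sample(pop, 3), key=f)
            # Blend crossover plus small Gaussian mutation per gene.
            children.append([ai + rng.random() * (bi - ai) + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = children
    return min(pop, key=f)

best = genetic_minimize(sphere)
```

    Because the elite individuals survive unchanged, the best fitness never worsens from one generation to the next; after 80 generations the population clusters near the optimum.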

  11. Reducing Total Power Consumption Method in Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network, and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a method of simply estimating the volume of power consumed by all network devices and assigning it to individual users.

  12. Computing through holistic systems design method: material formations workshop

    Directory of Open Access Journals (Sweden)

    Sevil Yazici

    2011-12-01

    Full Text Available The emergence of interdisciplinary fields of interest in architecture (digital media, and more specifically computational design tools) has transformed the common design, analysis, and building processes into a new discipline, to which the skills and knowledge required of the architectural designer should be adapted. With the exception of a few innovative higher-education institutions, there is no across-the-board academic implementation of emerging computational design applications at the undergraduate level in architecture. The Material Formations Workshop, organised by the Department of Architecture, Istanbul Technical University, investigated the logic of computing through the Holistic Systems Design Method (HSDM). The participants had diverse skills and knowledge and used HSDM to deal with multi-parameter problems as observed in nature, with physical material as the main driver of the design process. Possible future lines of the research are also suggested in this paper.

  13. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory, in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action, and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer-instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch, and light sensors.

  14. Computational methods to dissect cis-regulatory transcriptional networks

    Indian Academy of Sciences (India)

    Vibha Rani

    2007-12-01

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is a major goal of genomics. Computational genomics is a rapidly emerging area for deciphering the regulation of metazoan genes as well as interpreting the results of high-throughput screening. The integration of computer science with biology has expedited molecular modelling and the processing of large-scale data inputs such as microarrays, and the analysis of genomes, transcriptomes and proteomes. Many bioinformaticians have developed algorithms for predicting transcriptional regulatory mechanisms from sequence, gene expression, and interaction data. This review compiles information on the various computational methods adopted to dissect gene expression pathways.
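    A representative computational task in this area is scanning a DNA sequence with a position-weight matrix (PWM) to locate a putative transcription-factor binding site. The sketch below is our own toy illustration (the motif counts, pseudocount, and uniform background are assumptions), not a method from the review.

```python
import math

def log_odds_pwm(counts, background=0.25, pseudo=1.0):
    """Convert per-position base counts into a log-odds PWM."""
    pwm = []
    for col in counts:
        total = sum(col.values()) + 4 * pseudo
        pwm.append({b: math.log((col.get(b, 0) + pseudo) / total / background)
                    for b in "ACGT"})
    return pwm

def best_hit(seq, pwm):
    """Score every window of the sequence; return (best_score, position)."""
    w = len(pwm)
    return max((sum(pwm[j][seq[i + j]] for j in range(w)), i)
               for i in range(len(seq) - w + 1))

# Base counts for a toy 4-bp "TATA"-like motif observed in 10 sequences.
counts = [{"T": 9, "A": 1}, {"A": 9, "C": 1}, {"T": 8, "G": 2}, {"A": 10}]
pwm = log_odds_pwm(counts)
score, pos = best_hit("GGCGTATAGCGC", pwm)
```

    Positive log-odds scores indicate a window that looks more like the motif than like background; here the top-scoring window is the TATA at position 4.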

  15. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  16. Radiation Mitigation Methods for Advanced Readout Array Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is interested in the development of advanced instruments and instrument components for planetary science missions. Specifically, an area of importance in...

  17. Advances in computational dynamics of particles, materials and structures a unified approach

    CERN Document Server

    Har, Jason

    2012-01-01

    Computational methods for the modeling and simulation of the dynamic response and behavior of particles, materials and structural systems have had a profound influence on science, engineering and technology. Complex science and engineering applications dealing with complicated structural geometries and materials that would be very difficult to treat using analytical methods have been successfully simulated using computational tools. With the incorporation of quantum, molecular and biological mechanics into new models, these methods are poised to play an even bigger role in the future. Ad

  18. Inspection of advanced computational lithography logic reticles using a 193-nm inspection system

    Science.gov (United States)

    Yu, Ching-Fang; Lin, Mei-Chun; Lai, Mei-Tsu; Hsu, Luke T. H.; Chin, Angus; Lee, S. C.; Yen, Anthony; Wang, Jim; Chen, Ellison; Wu, David; Broadbent, William H.; Huang, William; Zhu, Zinggang

    2010-09-01

    We report inspection results of early 22-nm logic reticles designed with both conventional and computational lithography methods. Inspection is performed using a state-of-the-art 193-nm reticle inspection system in the reticle-plane inspection (RPI) mode, where both rule-based sensitivity control (RSC) and a newer model-based sensitivity control (MSC) method are tested. The evaluation includes defect detection performance using several special test reticles designed with both conventional and computational lithography methods; the reticles contain a variety of programmed critical defects which are measured based on wafer print impact. Also included are inspection results from several full-field product reticles designed with both conventional and computational lithography methods, to determine whether low nuisance-defect counts can be achieved. These early reticles are largely single-die and all inspections are performed in the die-to-database inspection mode only.

  19. THE DOMAIN DECOMPOSITION TECHNIQUES FOR THE FINITE ELEMENT PROBABILITY COMPUTATIONAL METHODS

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaoqi

    2000-01-01

    In this paper, we shall study the domain decomposition techniques for the finite element probability computational methods. These techniques provide a theoretical basis for parallel probability computational methods.

  20. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Over the past few years, fault detection for computer networks has attracted extensive attention because of its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely. But these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method, whose key idea is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected using a small set of probes, while ensuring that the entire network is covered after multiple detection stages. This method guarantees that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method.
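    The staged-probing idea in this record (probe one small region per stage, cover the whole network across stages) can be sketched as scheduling logic. This is our own minimal illustration; the round-robin partition, node names, and the `probe` callback are assumptions, and real probe dispatch and fault inference are beyond the toy.

```python
def schedule_stages(nodes, num_stages):
    """Assign nodes round-robin to stages; every node is covered exactly once,
    so each stage probes only ~n/num_stages nodes."""
    stages = [[] for _ in range(num_stages)]
    for idx, node in enumerate(sorted(nodes)):
        stages[idx % num_stages].append(node)
    return stages

def run_detection(nodes, num_stages, probe):
    """Probe one small region per stage; return the nodes reported faulty."""
    faulty = []
    for stage in schedule_stages(nodes, num_stages):
        faulty.extend(n for n in stage if not probe(n))
    return faulty

nodes = [f"node{i}" for i in range(10)]
down = {"node3", "node7"}  # simulated faults
found = run_detection(nodes, num_stages=4, probe=lambda n: n not in down)
```

    Each stage touches at most ceil(10/4) = 3 nodes, bounding per-stage probe traffic, while the union of the stages still covers the entire node set.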

  1. Applications of meshless methods for damage computations with finite strains

    Science.gov (United States)

    Pan, Xiaofei; Yuan, Huang

    2009-06-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied with the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. The computational results reveal that damage initiating in the interior of specimens extends to the exterior and causes fracture of the specimens, and that damage evolves quickly relative to the whole tension process. The EFG method provides a more stable and robust numerical solution compared with FEM analysis.

  2. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    Energy Technology Data Exchange (ETDEWEB)

    Hey, Tony [eScience Institute, University of Washington; Agarwal, Deborah [Lawrence Berkeley National Laboratory; Borgman, Christine [University of California, Los Angeles; Cartaro, Concetta [SLAC National Accelerator Laboratory; Crivelli, Silvia [Lawrence Berkeley National Laboratory; Van Dam, Kerstin Kleese [Pacific Northwest National Laboratory; Luce, Richard [University of Oklahoma; Arjun, Shankar [CADES, Oak Ridge National Laboratory; Trefethen, Anne [University of Oxford; Wade, Alex [Microsoft Research, Microsoft Corporation; Williams, Dean [Lawrence Livermore National Laboratory

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI’s recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summarizes their initial findings and recommendations.

  3. Recent advances in computational biology, bioinformatics, medicine, and healthcare by modern OR

    OpenAIRE

    Türkay, Metin; Weber, Gerhard-Wilhelm; Blazewicz, Jacek; Rauner, Marion

    2014-01-01

    CEJOR (2014) 22:427–430 DOI 10.1007/s10100-013-0327-2 EDITORIAL Recent advances in computational biology, bioinformatics, medicine, and healthcare by modern OR Gerhard-Wilhelm Weber · Jacek Blazewicz · Marion Rauner · Metin Türkay Published online: 7 September 2013 © Springer-Verlag Berlin Heidelberg 2013 At the occasion of the 25th European Conference on Operational Research, EURO XXV 2012, July 8–11, 2012, in Vilnius, Lithuania (http://www.euro-2012.lt/), the ...

  4. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    Science.gov (United States)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.

  5. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrickson, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  6. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  7. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is effectively managing complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks, which serve many types of users whose evolving relationships are fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties reach the system in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, their sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions, and the aggregation and sharing of geographically distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  8. Methods for integrating optical fibers with advanced aerospace materials

    Science.gov (United States)

    Poland, Stephen H.; May, Russell G.; Murphy, Kent A.; Claus, Richard O.; Tran, Tuan A.; Miller, Mark S.

    1993-07-01

    Optical fibers are attractive candidates for sensing applications in near-term smart materials and structures, due to their inherent immunity to electromagnetic interference and ground loops, their capability for distributed and multiplexed operation, and their high sensitivity and dynamic range. These same attributes also render optical fibers attractive for avionics busses for fly-by-light systems in advanced aircraft. The integration of such optical fibers with metal and composite aircraft and aerospace materials, however, remains a limiting factor in their successful use in such applications. This paper first details methods for the practical integration of optical fiber waveguides and cable assemblies onto and into materials and structures. Physical properties of the optical fiber and coatings which affect the survivability of the fiber are then considered. Mechanisms for the transfer of the strain from matrix to fiber for sensor and data bus fibers integrated with composite structural elements are evaluated for their influence on fiber survivability, in applications where strain or impact is imparted to the assembly.

  9. Regenerative medicine: advances in new methods and technologies.

    Science.gov (United States)

    Park, Dong-Hyuk; Eve, David J

    2009-11-01

    The articles published in the journal Cell Transplantation - The Regenerative Medicine Journal over the last two years reveal the recent and future cutting-edge research in the fields of regenerative and transplantation medicine. A total of 437 articles were published from 2007 to 2008, a 17% increase over the 373 articles in 2006-2007. Neuroscience was still the most common section in both the number of articles and the percentage of all manuscripts published. The increasing interest and rapid advances in bioengineering technology are highlighted by tissue engineering and bioartificial organs again being ranked second. For a similar reason, the methods and new technologies section grew significantly compared with the previous period. Articles focusing on the transplantation of stem cell lineages encompassed almost 20% of all articles published. By contrast, the non-stem-cell transplantation group, made up primarily of islet cells, followed by biomaterials, fetal neural tissue, etc., comprised less than 15%. Transplantation of cells pre-treated with medicine or gene transfection, to prolong graft survival or promote differentiation into the needed phenotype, was prevalent among the transplantation articles regardless of the kind of cells used. Meanwhile, the majority of non-transplantation-based articles were related to new devices for various purposes, characterization of unknown cells, medicines, cell preparation and/or optimization for transplantation (e.g. isolation and culture), and disease pathology. PMID:19865067

  10. Advanced methods of quality control in nuclear fuel fabrication

    International Nuclear Information System (INIS)

    Under pressure of the current economic and electricity market situation, utilities are implementing more demanding fuel utilization schemes, including higher burnups and thermal rates, longer fuel cycles and the use of MOX fuel. Therefore, fuel vendors have recently initiated new R and D programmes aimed at improving fuel quality, design and materials to produce robust and reliable fuel. In the early days of commercial fuel fabrication, emphasis was placed on advances in Quality Control/Quality Assurance related mainly to the product itself. In recent years, emphasis has shifted to improvements in process control and to the implementation of overall Total Quality Management (TQM) programmes. In the area of fuel quality control, statistical control methods are now widely implemented, replacing 100% inspection. This evolution, some practical examples and IAEA activities are described in the paper. The paper presents major findings of the latest IAEA Technical Meetings (TMs) and training courses in the area, with emphasis on information received at the TM and training course held in 1999 and other recent publications, to provide an overview of new developments in process/quality control, their implementation and the results obtained, including new approaches to QC
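    The shift from 100% inspection to statistical control described above rests on acceptance sampling. As an illustration only (the plan parameters below are hypothetical, not taken from the paper), the operating characteristic of a single sampling plan follows directly from the binomial distribution:

```python
from math import comb

def accept_probability(n: int, c: int, p: float) -> float:
    """Probability that a lot with defect fraction p passes a single
    (n, c) acceptance-sampling plan: accept the lot if at most c of
    the n sampled items are defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Operating-characteristic points for a hypothetical plan n=50, c=1:
good_lot = accept_probability(50, 1, 0.005)   # good lots almost always pass
bad_lot = accept_probability(50, 1, 0.05)     # bad lots usually fail
```

Sweeping p traces the full operating-characteristic curve, which is how such a plan would be tuned against producer's and consumer's risk targets.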

  11. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  12. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space–time calculations. The computer code TRIFON solves the space-energy problem in (r, z) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space–time neutron processes. A modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.

  13. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    Science.gov (United States)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Method (FEM) formulation is developed to estimate the reflection and transmission through the meta-material structure for normal plane-wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicolson-Ross method and Genetic Algorithms, a robust procedure is described for extracting the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients. A few numerical examples are also presented to validate the present approach.
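    The extraction step from reflection and transmission coefficients can be illustrated with a minimal free-space, normal-incidence Nicolson-Ross sketch. The forward slab model and the thin-sample (principal-branch) choice below are textbook simplifications under assumed parameters, not the paper's FEM/FDTD implementation:

```python
import cmath
import math

C0 = 299792458.0  # speed of light, m/s

def slab_s_params(eps_r, mu_r, d, f):
    """Forward model: normal-incidence S-parameters of a homogeneous
    slab of thickness d (m) in free space at frequency f (Hz)."""
    k0 = 2 * math.pi * f / C0
    n = cmath.sqrt(eps_r * mu_r)            # refractive index
    eta = cmath.sqrt(mu_r / eps_r)          # impedance relative to free space
    gamma = (eta - 1) / (eta + 1)           # single-interface reflection
    z = cmath.exp(-1j * k0 * n * d)         # one-way propagation factor
    denom = 1 - gamma**2 * z**2
    return gamma * (1 - z**2) / denom, z * (1 - gamma**2) / denom

def nrw_extract(s11, s21, d, f):
    """Nicolson-Ross inversion (free space, thin-sample branch)."""
    lam0 = C0 / f
    x = (s11**2 - s21**2 + 1) / (2 * s11)
    gamma = x + cmath.sqrt(x**2 - 1)
    if abs(gamma) > 1:                      # pick the physical root |gamma| <= 1
        gamma = x - cmath.sqrt(x**2 - 1)
    t = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)
    inv_lambda = cmath.log(1 / t) / (2j * math.pi * d)   # principal branch
    mu_r = lam0 * inv_lambda * (1 + gamma) / (1 - gamma)
    eps_r = lam0**2 * inv_lambda**2 / mu_r
    return eps_r, mu_r
```

Round-tripping a hypothetical lossless slab (eps_r = 4, d = 5 mm, f = 3 GHz) through both functions recovers the input constants; for electrically thick samples the logarithm's branch ambiguity must be resolved, which is one place the paper's Genetic Algorithm search would help.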

  14. Computational methods for efficient structural reliability and reliability sensitivity analysis

    Science.gov (United States)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
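    The core idea of concentrating samples near the failure domain can be illustrated with a much-simplified importance sampler whose density is recentred at a fixed most probable failure point (MPP); the limit state and MPP below are hypothetical, and the paper's adaptive, incrementally grown sampling domain is omitted:

```python
import math
import random

def failure_prob_is(g, mpp, n=20000, seed=1):
    """Importance-sampling estimate of P[g(x) < 0] for independent
    standard-normal variables x, sampling from normals recentred at
    the MPP and reweighting by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.gauss(mu, 1.0) for mu in mpp]
        if g(x) < 0:
            # likelihood ratio: standard-normal pdf / shifted pdf
            log_w = sum(-0.5 * xi**2 + 0.5 * (xi - mu)**2
                        for xi, mu in zip(x, mpp))
            total += math.exp(log_w)
    return total / n

# Hypothetical limit state: failure when x1 + x2 > 6,
# with exact Pf = Phi(-6/sqrt(2)) ~ 1.1e-5.
g = lambda x: 6.0 - (x[0] + x[1])
pf = failure_prob_is(g, mpp=[3.0, 3.0])
```

Crude Monte Carlo would need on the order of 10^7 samples to see such a rare event; recentring at the MPP makes roughly half the samples fall in the failure domain, which is the efficiency gain the AIS method then refines adaptively.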

  15. Improved Method of Blind Speech Separation with Low Computational Complexity

    Directory of Open Access Journals (Sweden)

    Kazunobu Kondo

    2011-01-01

    This paper proposes a frame-wise spectral soft mask method based on the interchannel power ratio of tentatively separated signals in the frequency domain. The soft mask cancels the transfer function between the sources and the separated signals. A theoretical analysis of the selection criteria and the soft mask is given. Performance and effectiveness are evaluated via source separation simulations and a computational estimate, and the experimental results show significantly improved performance for the proposed method. The segmental signal-to-noise ratio achieves 7 dB and 3 dB, and the cepstral distortion achieves 1 dB and 2.5 dB, in anechoic and reverberant conditions, respectively. Moreover, computational complexity is reduced by more than 80% compared with unmodified FDICA.
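    An interchannel-power-ratio soft mask can be sketched roughly as follows; this is a generic power-ratio mask over one STFT frame, not the paper's exact frame-wise formulation:

```python
def power_ratio_masks(y1, y2, eps=1e-12):
    """Soft masks from the interchannel power ratio of two tentatively
    separated complex STFT frames y1, y2 (sequences of complex bins).
    Each bin's mask is the fraction of total power carried by that
    output, so the masks for the two sources sum to one per bin."""
    m1, m2 = [], []
    for a, b in zip(y1, y2):
        pa, pb = abs(a) ** 2, abs(b) ** 2
        r = pa / (pa + pb + eps)      # dominance of source 1 in this bin
        m1.append(r)
        m2.append(1.0 - r)
    return m1, m2

def apply_mask(mask, frame):
    """Refine a separated frame by scaling each bin by its soft mask."""
    return [m * x for m, x in zip(mask, frame)]
```

Because the mask is computed per bin from already-separated outputs, it adds only a few operations per bin on top of the separation filter, consistent with the low-complexity aim of the method.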

  16. COMPUTATIONAL FLOW RATE FEEDBACK AND CONTROL METHOD IN HYDRAULIC ELEVATORS

    Institute of Scientific and Technical Information of China (English)

    Xu Bing; Ma Jien; Lin Jianjie

    2005-01-01

    The computational flow rate feedback and control method, which can be used in proportional-valve-controlled hydraulic elevators, is discussed and analyzed. In a hydraulic elevator using this method, a microprocessor receives pressure information from the pressure transducers and computes the flow rate through the proportional valve with a real-time pressure-flow conversion algorithm. Such a hydraulic elevator has lower cost and energy consumption than the conventional closed-loop-control hydraulic elevator, whose flow rate is measured by a flow meter. Experiments were carried out on a test rig that simulates the load of a hydraulic elevator. Based on the experimental results, ways to refine the pressure-flow conversion algorithm are pointed out.
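    The abstract does not spell out the pressure-flow conversion itself; a common choice for a proportional valve is the sharp-edged orifice equation, sketched here with an assumed discharge coefficient and oil density:

```python
import math

def valve_flow(delta_p, orifice_area, cd=0.7, rho=870.0):
    """Estimated flow rate (m^3/s) through a proportional valve from the
    measured pressure drop delta_p (Pa) across an orifice of the given
    area (m^2), using the standard orifice equation
    Q = Cd * A * sqrt(2 * dp / rho). The discharge coefficient cd and
    hydraulic-oil density rho are assumed illustrative values, not
    figures from the paper."""
    return cd * orifice_area * math.sqrt(2.0 * max(delta_p, 0.0) / rho)
```

In a real controller, A would come from the commanded spool position and delta_p from the two transducers; calibrating cd against measured travel is the kind of algorithm refinement the experiments point toward.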

  17. A Review of Failure Analysis Methods for Advanced 3D Microelectronic Packages

    Science.gov (United States)

    Li, Yan; Srinath, Purushotham Kaushik Muthur; Goyal, Deepak

    2016-01-01

    Advanced three-dimensional (3D) packaging is a key enabler in driving form factor reduction, performance benefits, and package cost reduction, especially in the fast-paced mobility and ultraportable consumer electronics segments. The high level of functional integration and the complex package architecture pose a significant challenge for conventional fault isolation (FI) and failure analysis (FA) methods. Innovative FI/FA tools and techniques are required to tackle the technical and throughput challenges. In this paper, the applications of FI and FA techniques such as Electro Optic Terahertz Pulse Reflectometry, 3D x-ray computed tomography, lock-in thermography, and novel physical sample preparation methods to 3D packages with package-on-package and stacked-die with through-silicon-via configurations are reviewed, along with the key FI and FA challenges.

  18. The Second Development Method and Application Based on Ansys in Advanced Digital Manufacturing

    Institute of Scientific and Technical Information of China (English)

    SUN Yuantao; WANG Shaomei; ZHAO Zhangyan

    2006-01-01

    Computer aided engineering is aimed at numerical simulation, an important link in advanced digital manufacturing. Secondary development on the Ansys platform is frequently carried out, with Visual Basic and APDL as the principal development tools, applied together in product design. This paper describes the flow and method of secondary development based on Ansys. The parametric design and analysis of bridge girder erecting equipment is carried out with the Ansys software and its secondary development tools, APDL and Visual Basic, including the interaction between Ansys batch solving and Visual Basic. The method speeds up design and enhances product quality and performance.

  19. Electrochemical test methods for advanced battery and semiconductor technology

    Science.gov (United States)

    Hsu, Chao-Hung

    This dissertation consists of two studies: the first evaluated metallic materials for advanced lithium ion batteries, and the second determined the dielectric constant k of low-k materials. The advanced lithium ion battery is miniaturized for implantable medical devices and capable of being recharged from outside the body using magnetic induction, without physical connections. The stability of the metallic materials employed in the lithium ion battery is one of the major safety concerns. Three types of materials---Pt-Ir alloy, Ti alloys, and stainless steels---were evaluated extensively in this study. The electrochemical characteristics of the Pt-Ir alloy, Ti alloys, and stainless steels were evaluated in several types of battery electrolytes in order to determine candidate materials for long-term use in lithium ion batteries. The dissolution behavior of these materials and the decomposition behavior of the battery electrolyte were investigated using the anodic potentiodynamic polarization (APP) technique. Lifetime prediction for metal dissolution was conducted using the constant potential polarization (CPP) technique. The electrochemical impedance spectroscopy (EIS) technique was employed to investigate metal dissolution behavior and battery electrolyte decomposition at the open circuit potential (OCP). The scanning electron microscope (SEM) was used to observe morphology changes after these tests. The effects of experimental factors on the corrosion behavior of the metallic materials and the stability of the battery electrolytes were also investigated using a 2^3 factorial design approach. Integration of materials having a low dielectric constant k as interlayer dielectrics and/or low-resistivity conductors will partially solve the RC delay problem limiting the performance of high-speed logic chips. Samples of JSR LKD 5109 material capped by several materials were evaluated using EIS.
The feasibility of using

  20. Computational Methods for Failure Analysis and Life Prediction

    Science.gov (United States)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.