WorldWideScience

Sample records for ale computational methods

  1. Modified ICED-ALE method for astrogeophysical plasma flows

    International Nuclear Information System (INIS)

    Wu, S.T.; Song, M.T.; Dryer, M.

    1991-01-01

    The Implicit-Continuous-Eulerian-Difference Mesh-Arbitrary-Lagrangian-Eulerian (ICED-ALE) algorithm of Brackbill and Pracht (1973) is modified for the study of astrophysical plasma flows in which dynamical effects are important. In the present study the general energy-conservation law is applied directly in the iteration process, with the total (kinetic, specific-internal, and magnetic) energy density obtained implicitly at the end of the process. An example is computed in which the modified method converges substantially faster than the original. The initializing calculation, or explicit phase, in which the electric current density, magnetic field diffusion, energy augmentation, and a zero-order approximation of the flow velocity are given, is described. Consideration is given to the iteration process, or implicit phase, from which the exact Lagrangian solution for energy density, velocity, and magnetic field is obtained. 8 refs

  2. An Invariant-Preserving ALE Method for Solids under Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sambasivan, Shiv Kumar [Los Alamos National Laboratory; Christon, Mark A [Los Alamos National Laboratory

    2012-07-17

    We are proposing a fundamentally new approach to ALE methods for solids undergoing large deformation due to extreme loading conditions. Our approach is based on a physically motivated and mathematically rigorous construction of the underlying Lagrangian method, vector/tensor reconstruction, remapping, and interface reconstruction. It is transformational because it deviates dramatically from traditionally accepted ALE methods and provides the following set of unique attributes: (1) a three-dimensional, finite volume, cell-centered ALE framework with advanced hypo-/hyper-elasto-plastic constitutive theories for solids; (2) a new physically and mathematically consistent reconstruction method for vector/tensor fields; (3) an advanced invariant-preserving remapping algorithm for vector/tensor quantities; (4) a moment-of-fluid (MoF) interface reconstruction technique for multi-material problems with solids undergoing large deformations. This work brings together many new concepts that, in combination with emergent cell-centered Lagrangian hydrodynamics methods, will produce a cutting-edge ALE capability and define a new state of the art. Many ideas in this work are new, completely unexplored, and hence high risk. The proposed research and the resulting algorithms will be of immediate use in Eulerian, Lagrangian and ALE codes under the ASC program at the lab. In addition, the research on invariant-preserving reconstruction/remap of tensor quantities is of direct interest to ongoing CASL and climate modeling efforts at LANL. The application space impacted by this work includes Inertial Confinement Fusion (ICF), Z-pinch, munition-target interactions, geological impact dynamics, shock processing of powders, and shaped charges. The ALE framework will also provide a suitable test-bed for rapid development and assessment of hypo-/hyper-elasto-plastic constitutive theories. Today, there are no invariant-preserving ALE algorithms for treating solids with large deformations.

  3. ALE finite volume method for free-surface Bingham plastic fluids with general curvilinear coordinates

    International Nuclear Information System (INIS)

    Nagai, Katsuaki; Ushijima, Satoru

    2010-01-01

    A numerical prediction method has been proposed to predict Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain-rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with Cauchy momentum equations rather than Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are updated at each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in the physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables reasonable prediction of Bingham plastic fluids with a free surface in a container.

  4. ALE finite volume method for free-surface Bingham plastic fluids with general curvilinear coordinates

    Science.gov (United States)

    Nagai, Katsuaki; Ushijima, Satoru

    2010-06-01

    A numerical prediction method has been proposed to predict Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain-rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with Cauchy momentum equations rather than Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are updated at each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in the physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables reasonable prediction of Bingham plastic fluids with a free surface in a container.
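
    The two abstracts above do not spell out the constitutive relation, so the following is a hedged sketch rather than the authors' formulation: it evaluates the standard Bingham model with the Papanastasiou regularization, a common way to keep the effective viscosity bounded as the shear rate vanishes in free-surface solvers of this kind. The yield stress, plastic viscosity and regularization parameter are illustrative values only.

        import numpy as np

        def bingham_effective_viscosity(gamma_dot, tau_y, mu_p, m=1000.0):
            """Papanastasiou-regularized Bingham viscosity (illustrative):
            mu_eff = mu_p + tau_y * (1 - exp(-m * gdot)) / gdot, which tends to
            mu_p + tau_y * m as gdot -> 0 instead of blowing up."""
            gdot = np.maximum(gamma_dot, 1e-12)   # guard against division by zero
            return mu_p + tau_y * (1.0 - np.exp(-m * gdot)) / gdot

        # The shear stress mu_eff * gdot recovers tau ~ tau_y + mu_p * gdot once
        # gdot >> 1/m, i.e. the classical Bingham law above the yield stress.
        gdot = np.logspace(-4, 2, 7)
        print(bingham_effective_viscosity(gdot, tau_y=10.0, mu_p=0.5) * gdot)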

  5. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    International Nuclear Information System (INIS)

    Koniges, A.; Eder, E.; Liu, W.; Barnard, J.; Friedman, A.; Logan, G.; Fisher, A.; Masters, N.; Bertozzi, A.

    2011-01-01

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other approaches require seeding or imposed perturbations for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion.
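
    The diffuse interface surface tension model is described above only by name; as a rough sketch under standard assumptions, the following advances the plain (non-advective) 1D Cahn-Hilliard equation with a double-well free energy, the mechanism that lets droplets break up without imposed perturbations. Mobility, gradient-energy coefficient and time step are illustrative, not values from ALE-AMR.

        import numpy as np

        def laplacian(u, dx):
            """Second-order periodic Laplacian in 1D."""
            return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

        def cahn_hilliard_step(c, dt, dx, M=1.0, kappa=1e-4):
            """One explicit step of dc/dt = M * lap(mu), with chemical potential
            mu = f'(c) - kappa * lap(c) and double-well f(c) = c^2 (1 - c)^2."""
            mu = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * laplacian(c, dx)
            return c + dt * M * laplacian(mu, dx)

        # A perturbed mixture coarsens toward c ~ 0 / c ~ 1 regions separated by
        # diffuse interfaces (run longer for full separation).
        n = 128
        c = 0.5 + 0.05 * (np.random.rand(n) - 0.5)
        for _ in range(5000):
            c = cahn_hilliard_step(c, dt=1e-7, dx=1.0 / n)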

  6. An ALE formulation of embedded boundary methods for tracking boundary layers in turbulent fluid-structure interaction problems

    Science.gov (United States)

    Farhat, Charbel; Lakshminarayan, Vinod K.

    2014-04-01

    Embedded Boundary Methods (EBMs) for Computational Fluid Dynamics (CFD) are usually constructed in the Eulerian setting. They are particularly attractive for complex Fluid-Structure Interaction (FSI) problems characterized by large structural motions and deformations. They are also critical for flow problems with topological changes and FSI problems with cracking. For all of these problems, the alternative Arbitrary Lagrangian-Eulerian (ALE) methods are often infeasible because of the issue of mesh crossovers. However, for viscous flows, Eulerian EBMs for CFD do not track the boundary layers around dynamic rigid or flexible bodies. Consequently, the application of these methods to viscous FSI problems requires either a high mesh resolution in a large part of the computational fluid domain, or adaptive mesh refinement. Unfortunately, the first option is computationally inefficient, and the second one is labor intensive. For these reasons, an alternative approach is proposed in this paper for maintaining all moving boundary layers resolved during the simulation of a turbulent FSI problem using an EBM for CFD. In this approach, which is simple and computationally reasonable, the underlying non-body-fitted mesh is rigidly translated and/or rotated in order to track the rigid component of the motion of the dynamic obstacle. Then, the flow computations away from the embedded surface are performed using the ALE framework, and the wall boundary conditions are treated by the chosen Eulerian EBM for CFD. Hence, the solution of the boundary layer tracking problem proposed in this paper can be described as an ALE implementation of a given EBM for CFD. Its basic features are illustrated with the Large Eddy Simulation, using a non-body-fitted mesh, of a turbulent flow past an airfoil in heaving motion. Its strong potential for the solution of challenging FSI problems at reasonable computational costs is also demonstrated with the simulation of turbulent flows past a family of
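
    A minimal sketch of the central device, assuming nothing about the authors' implementation: the non-body-fitted mesh is rigidly rotated and translated to follow the rigid component of the obstacle's motion, and the resulting grid velocity is what enters the ALE flux terms. The mesh, angles and time step below are hypothetical.

        import numpy as np

        def track_rigid_motion(nodes0, theta, disp):
            """Rigidly rotate (angle theta) and translate (vector disp) the
            reference node coordinates nodes0, shape (N, 2)."""
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])
            return nodes0 @ R.T + disp

        # a small 5 x 5 background grid standing in for the CFD mesh
        g = np.linspace(0.0, 1.0, 5)
        nodes0 = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

        # grid velocity for the ALE terms, by differencing two configurations
        dt = 1e-3
        xa = track_rigid_motion(nodes0, theta=0.100, disp=np.array([0.0, 0.020]))
        xb = track_rigid_motion(nodes0, theta=0.101, disp=np.array([0.0, 0.021]))
        w = (xb - xa) / dt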

  7. The ALE-method with triangular elements: direct convection of integration point values

    NARCIS (Netherlands)

    van Haaren, M.J.; van Haaren, M.J.; Stoker, H.C.; van den Boogaard, Antonius H.; Huetink, Han

    2000-01-01

    The arbitrary Lagrangian-Eulerian (ALE) finite element method is applied to the simulation of forming processes where material is highly deformed. Here, the split formulation is used: a Lagrangian step is done with an implicit finite element formulation, followed by an explicit (purely convective) step in which integration point values are convected directly.
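
    The split formulation pairs a Lagrangian step with a convection step. As an illustration of the convection half (a generic conservative remap, not the paper's direct convection of integration point values), the sketch below transfers cell averages from the deformed Lagrangian mesh back onto the original mesh by exact interval overlaps.

        import numpy as np

        def remap_1d(x_donor, q, x_target):
            """First-order conservative remap of cell averages q from the mesh
            with nodes x_donor onto the mesh x_target (same total extent): each
            target cell takes the overlap-weighted mean of the donor cells."""
            q_new = np.zeros(len(x_target) - 1)
            for i in range(len(x_target) - 1):
                a, b = x_target[i], x_target[i + 1]
                total = 0.0
                for j in range(len(x_donor) - 1):
                    overlap = min(b, x_donor[j + 1]) - max(a, x_donor[j])
                    if overlap > 0.0:
                        total += q[j] * overlap
                q_new[i] = total / (b - a)
            return q_new

        x0 = np.linspace(0.0, 1.0, 11)                  # original mesh
        x_lag = x0 + 0.02 * np.sin(2.0 * np.pi * x0)    # after the Lagrangian step
        q = np.where(0.5 * (x_lag[:-1] + x_lag[1:]) < 0.5, 1.0, 0.0)
        q_back = remap_1d(x_lag, q, x0)                 # convected back onto x0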

  8. Modelling ricochet of a cylinder on water using ALE and SPH methods

    Directory of Open Access Journals (Sweden)

    T DeVuyst

    2016-10-01

    Full Text Available Ricochet, the rebound of a body off a surface, is an important scenario in engineering applications. The specific case chosen here is the impact of a solid steel body on a water surface: the body hits the water with a given velocity and angle, and the dependence of the ricochet behaviour on these parameters is of interest. This impact scenario, which can be extended to more complex cases such as the ditching of aeroplanes, has been studied extensively in the past. For that reason, it was decided to compare two numerical analyses with each other: SPH in the internally developed code MCM at Cranfield University, and the ALE method in the commercial programme LS-Dyna. Because the SPH development was at an early stage, with a 2D model built in a 3D solver, verification against another method was crucial. The two simulations were therefore set up and the ricochet behaviour investigated. In contrast to the experimental results, both models show an unexpected over-prediction of ricochet at higher impact velocities, independent of the numerical method, and they agree in this over-prediction. The benefits arising from the collaborative use of SPH and ALE to describe a problem are presented.

  9. Modeling warm dense matter experiments using the 3D ALE-AMR code and the move toward exascale computing

    Directory of Open Access Journals (Sweden)

    Koniges Alice

    2013-11-01

    Full Text Available The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion.

  10. BODY OF JOGET ALE-ALE AS CAPITAL OF RESISTANCE

    Directory of Open Access Journals (Sweden)

    Salman Alfarisi

    2014-08-01

    Full Text Available This paper aims to understand the phenomenon of the dancing body in the art of Ale-ale, which emerged among the Sasak of Lombok in 1999. The three elements of this art (music, song, and dance) reflect the idea of resistance against the establishment, dismantling the single truth of the Tuan Guru and the cultural elite as the dominant groups in Sasak society, as expressed in the dance. To understand the resistance dimensions of the dancing body, a qualitative-interpretative research design within the paradigm of cultural studies was used. The theoretical framework draws on critical social theory, including Bourdieu's theory of social practice, Derrida's theory of deconstruction, and postmodern aesthetics. Data were obtained through in-depth interviews, participatory observation, and documentation. Two important findings emerge from this study. First, the dancing body reveals the rich resistance dimensions of Ale-ale: through dancing, the body can liberate itself from the urgency of its economic and social reality as constructed by the elite groups of the Sasak community. In this context, the dancing body is not only a personal expression but also an arena in which the power of the dominant groups in Sasak society, the Tuan Guru and the cultural elite, is contested. Second, the dancing body can be seen not merely as an aesthetic expression but as a site of struggle between marginalized dancing groups and the dominant group, the Tuan Guru, over religious and cultural questions.

  11. A comparative study of interface reconstruction methods for multi-material ALE simulations

    International Nuclear Information System (INIS)

    Kucharik, Milan; Garimella, Rao V.; Schofield, Samuel P.; Shashkov, Mikhail J.

    2010-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.
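
    As a hedged illustration of the VOF family compared above, the sketch below estimates a 2D interface normal from a 3 x 3 block of volume fractions with a simplified Youngs-type gradient (production codes use weighted stencils and then position a linear interface that matches the cell's volume fraction exactly).

        import numpy as np

        def interface_normal(vof):
            """Unit normal from a 3x3 volume-fraction block (rows = y, cols = x),
            pointing out of the tracked material, i.e. against the gradient.
            Plain column/row sums stand in for Youngs' weighted stencil."""
            gx = (vof[:, 2].sum() - vof[:, 0].sum()) / 3.0
            gy = (vof[2, :].sum() - vof[0, :].sum()) / 3.0
            n = -np.array([gx, gy])
            return n / (np.linalg.norm(n) + 1e-30)

        vof = np.array([[1.0, 1.0, 1.0],     # material fills the low-y rows ...
                        [0.6, 0.5, 0.4],
                        [0.0, 0.0, 0.0]])
        print(interface_normal(vof))         # ... so the normal points toward +y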

  12. Analogue computing methods

    CERN Document Server

    Welbourne, D

    1965-01-01

    Analogue Computing Methods presents the field of analogue computation and simulation in a compact and convenient form, providing an outline of models and analogues that have been produced to solve physical problems for the engineer and how to use and program the electronic analogue computer. This book consists of six chapters. The first chapter provides an introduction to analogue computation and discusses certain mathematical techniques. The electronic equipment of an analogue computer is covered in Chapter 2, while its use to solve simple problems, including the method of scaling, is elaborated.

  13. Comparison of ALE finite element method and adaptive smoothed finite element method for the numerical simulation of friction stir welding

    NARCIS (Netherlands)

    van der Stelt, A.A.; Bor, Teunis Cornelis; Geijselaers, Hubertus J.M.; Quak, W.; Akkerman, Remko; Huetink, Han

    2011-01-01

    In this paper, the material flow around the pin during friction stir welding (FSW) is simulated using a 2D plane strain model. A pin rotates without translation in a disc with elasto-viscoplastic material properties and the outer boundary of the disc is clamped. Two numerical methods are used to simulate this process: the ALE finite element method and the adaptive smoothed finite element method.

  14. Measuring Extinction with ALE

    Science.gov (United States)

    Zimmer, Peter C.; McGraw, J. T.; Gimmestad, G. G.; Roberts, D.; Stewart, J.; Smith, J.; Fitch, J.

    2007-12-01

    ALE (Astronomical LIDAR for Extinction) is deployed at the University of New Mexico's (UNM) Campus Observatory in Albuquerque, NM. It has begun a year-long testing phase prior to deployment at McDonald Observatory in support of the CCD/Transit Instrument II (CTI-II). ALE is designed to produce a high-precision measurement of atmospheric absorption and scattering above the observatory site every ten minutes of every moderately clear night. LIDAR (LIght Detection And Ranging) is the VIS/UV/IR analog of radar, using a laser, telescope and time-gated photodetector instead of a radio transmitter, dish and receiver. In the case of ALE, an elastic backscatter LIDAR, 20ns-long, eye-safe laser pulses are launched 2500 times per second from a 0.32m transmitting telescope co-mounted with a 50mm short-range receiver on an alt-az mounted 0.67m long-range receiver. Photons from the laser pulse are scattered and absorbed as the pulse propagates through the atmosphere, a portion of which is scattered into the field of view of the short- and long-range receiver telescopes and detected by a photomultiplier. The properties of a given volume of atmosphere along the LIDAR path are inferred from both the altitude-resolved backscatter signal as well as the attenuation of backscatter signal from altitudes above it. We present ALE profiles from the commissioning phase and demonstrate some of the astronomically interesting atmospheric information that can be gleaned from these data, including, but not limited to, total line-of-sight extinction. This project is funded by NSF Grant 0421087.
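
    To connect the retrieval to the astronomical quantity of interest: once an extinction-coefficient profile sigma(z) has been inferred from the backscatter signal, the total line-of-sight extinction in magnitudes follows from the optical depth tau as m = 2.5 log10(e) tau, about 1.086 tau. The profile below is hypothetical.

        import numpy as np

        def extinction_magnitudes(z, sigma):
            """Line-of-sight extinction in magnitudes from a profile sigma(z)
            [1/km] on altitudes z [km]: tau = integral of sigma dz (trapezoid
            rule), then m = 2.5 * log10(e) * tau."""
            tau = np.sum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(z))
            return 2.5 * np.log10(np.e) * tau

        z = np.linspace(0.0, 10.0, 200)         # altitude grid, km
        sigma = 0.12 * np.exp(-z / 1.5)         # hypothetical aerosol profile
        print(extinction_magnitudes(z, sigma))  # ~0.2 mag for this profile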

  15. A Cell-Centered Multiphase ALE Scheme With Structural Coupling

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Timothy Alan [Univ. of California, Davis, CA (United States)

    2012-04-16

    A novel computational scheme has been developed for simulating compressible multiphase flows interacting with solid structures. The multiphase fluid is computed using a Godunov-type finite-volume method. This has been extended to allow computations on moving meshes using a direct arbitrary Lagrangian-Eulerian (ALE) scheme. The method has been implemented within a Lagrangian hydrocode, which allows modeling the interaction with Lagrangian structural regions. Although the above scheme is general enough for use on many applications, the ultimate goal of the research is the simulation of heterogeneous energetic material, such as explosives or propellants. The method is powerful enough for application to all stages of the problem, including the initial burning of the material, the propagation of blast waves, and interaction with surrounding structures. The method has been tested on a number of canonical multiphase tests as well as fluid-structure interaction problems.
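
    A minimal sketch of a Godunov-type finite-volume update on a moving mesh, reduced to 1D linear advection rather than the multiphase system described above: fluxes are upwinded relative to the moving faces, and new cell volumes come from the moved nodes, keeping the update consistent with the geometric conservation law. Boundary fluxes are omitted (closed ends with zero grid velocity there).

        import numpy as np

        def ale_advect_step(x, u, a, w, dt):
            """One first-order ALE step for du/dt + a du/dx = 0.
            x: node coordinates (n+1,), u: cell averages (n,),
            a: advection speed, w: node velocities (n+1,), zero at the ends."""
            vol_old = np.diff(x)
            x_new = x + dt * w
            lam = a - w[1:-1]                     # wave speed seen by each face
            upw = np.where(lam >= 0.0, u[:-1], u[1:])
            flux = lam * upw                      # upwind flux, interior faces
            total = vol_old * u
            total[:-1] -= dt * flux               # flux leaves the left cell ...
            total[1:] += dt * flux                # ... and enters the right cell
            return x_new, total / np.diff(x_new)  # GCL: volumes from moved nodes

        x = np.linspace(0.0, 1.0, 101)
        u = np.exp(-200.0 * (0.5 * (x[:-1] + x[1:]) - 0.3) ** 2)
        w = 0.2 * np.sin(np.pi * x)               # smooth motion, ends pinned
        for _ in range(100):
            x, u = ale_advect_step(x, u, a=1.0, w=w, dt=0.002)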

  16. An AMR capable finite element diffusion solver for ALE hydrocodes [An AMR capable diffusion solver for ALE-AMR]

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bailey, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaiser, T. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eder, D. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Masters, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Koniges, A. E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Anderson, R. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-02-01

    Here, we present a novel method for the solution of the diffusion equation on a composite AMR mesh. This approach is suitable for adding diffusion-based physics modules to hydrocodes that support ALE and AMR capabilities. To illustrate, we proffer our implementations of diffusion-based radiation transport and heat conduction in a hydrocode called ALE-AMR. Numerical experiments conducted with the diffusion solver and associated physics packages yield 2nd order convergence in the L2 norm.
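
    The quoted 2nd-order L2 convergence is the sort of claim verified with an observed-order computation on successively refined meshes; a small helper with purely illustrative error values is sketched below.

        import numpy as np

        def observed_order(h, err):
            """Observed convergence order from mesh sizes h and errors err:
            p = log(e_k / e_{k+1}) / log(h_k / h_{k+1}) per refinement pair."""
            h, err = np.asarray(h, float), np.asarray(err, float)
            return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

        # e.g. L2 errors from a manufactured diffusion solution (illustrative)
        print(observed_order([0.1, 0.05, 0.025], [4.0e-3, 1.0e-3, 2.5e-4]))  # ~[2, 2]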

  17. Computer intensive statistical methods

    Science.gov (United States)

    Yakowitz, S.

    The special session “Computer-Intensive Statistical Methods” was held in morning and afternoon parts at the 1985 AGU Fall Meeting in San Francisco, Calif. Its mission was to provide a forum for hydrologists and statisticians who are active in bringing unconventional, algorithmic-oriented statistical techniques to bear on problems of hydrology. Statistician Emanuel Parzen (Texas A&M University, College Station, Tex.) opened the session by relating recent developments in quantile estimation methods and showing how properties of such methods can be used to advantage to categorize runoff data previously analyzed by I. Rodriguez-Iturbe (Universidad Simon Bolivar, Caracas, Venezuela). Statistician Eugene Schuster (University of Texas, El Paso) discussed recent developments in nonparametric density estimation which enlarge the framework for convenient incorporation of prior and ancillary information. These extensions were motivated by peak annual flow analysis. Mathematician D. Myers (University of Arizona, Tucson) gave a brief overview of “kriging” and outlined some recently developed methodology.

  18. Three-dimensional local ALE-FEM method for fluid flow in domains containing moving boundaries/objects interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Monayem, A. K. M. [Univ. of New Mexico, Albuquerque, NM (United States); Mazumder, H. [Univ. of New Mexico, Albuquerque, NM (United States); Heinrich, Juan C. [Univ. of New Mexico, Albuquerque, NM (United States)

    2015-03-05

    A three-dimensional finite element method for the numerical simulations of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of calculating on domains of complex geometry with moving boundaries or devices that can also have a complex geometry, without danger of the mesh becoming unsuitable due to its continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique to perform simulations involving moving boundaries in a three-dimensional domain.

  19. Computational methods in stochastic dynamics

    CERN Document Server

    Papadrakakis, Manolis; Papadopoulos, Vissarion

    2011-01-01

    Covering what is an emerging frontier in research, this book focuses on advanced computational methods and software tools. These can be of huge assistance in tackling complex problems in stochastic dynamics and seismic analysis, as well as structural design.

  20. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency, and accuracy.

  1. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  2. Lagrangian and ALE Formulations For Soil Structure Coupling with Explosive Detonation

    Directory of Open Access Journals (Sweden)

    M Souli

    2017-03-01

    Full Text Available Simulation of soil-structure interaction is becoming more and more the focus of computational engineering in civil and mechanical engineering, where FEM (finite element methods) for structural and soil mechanics and finite volume methods for CFD are dominant. New formulations have been developed for FSI applications using the ALE (Arbitrary Lagrangian-Eulerian) method and mesh-free methods such as SPH (Smoothed Particle Hydrodynamics). In the defence industry, engineers have been developing protection systems for many years to reduce the vulnerability of light armoured vehicles (LAV) against mine blast using classical Lagrangian FEM methods. To improve simulations and assist in the development of these protections, experimental tests and new numerical techniques are performed. To carry out these numerical calculations, initial conditions such as the loading prescribed by a mine on a structure need to be simulated adequately. The effects of blast on structures often depend on how these initial conditions are estimated and applied. In this report, two methods were used to simulate a mine blast: the classical Lagrangian and the ALE formulations. The comparative study was done for a simple and a more complex target. Particle methods such as SPH can also be used for soil-structure interaction.

  3. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
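
    A heavily simplified sketch of the per-pixel least-squares idea: hypothetical diagonal matrices stand in for the measured display/filter spectral responses, and a cube-root transform stands in for the full CIE L*a*b* conversion, so this shows the structure of the computation rather than the paper's actual pipeline.

        import numpy as np
        from scipy.optimize import least_squares

        T_L = np.diag([0.90, 0.05, 0.05])  # red filter: mostly R reaches the eye
        T_R = np.diag([0.05, 0.85, 0.85])  # cyan filter: mostly G and B

        def f(t):
            """Cube-root nonlinearity, a crude stand-in for CIE L*a*b*."""
            return np.cbrt(t)

        def residual(c, left_rgb, right_rgb):
            """Perceptual mismatch of anaglyph pixel c as seen by each eye
            against the (filtered) left and right image pixels."""
            return np.concatenate([f(T_L @ c) - f(T_L @ left_rgb),
                                   f(T_R @ c) - f(T_R @ right_rgb)])

        left = np.array([0.8, 0.2, 0.1])
        right = np.array([0.7, 0.3, 0.2])
        sol = least_squares(residual, x0=np.full(3, 0.5), bounds=(0.0, 1.0),
                            args=(left, right))
        print(sol.x)   # anaglyph RGB for this stereo pixel pair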

  4. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  5. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    Science.gov (United States)

    2016-06-12

    buried in soil viz., (1) coupled discrete element and particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The ... computational costs, inconsistent robustness and long run times, alternate modeling methods such as Smoothed Particle Hydrodynamics (SPH) [7] and DEM are gaining ... DEM_PGM and identify the limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and ...

  6. Forecasting methods for computer technology

    Energy Technology Data Exchange (ETDEWEB)

    Worlton, W.J.

    1978-01-01

    How well the computer site manager avoids future dangers and takes advantage of future opportunities depends to a considerable degree on how much anticipatory information he has available. People who rise in management are expected with each successive promotion to concern themselves with events further in the future. It is the function of technology projection to increase this stock of information about possible future developments in order to put planning and decision making on a more rational basis. Past efforts at computer technology projections have an accuracy that declines exponentially with time. Thus, precisely defined technology projections beyond about three years should be used with considerable caution. This paper reviews both subjective and objective methods of technology projection and gives examples of each. For an integrated view of future prospects in computer technology, a framework for technology projection is proposed.

  7. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users, it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  8. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results.

  9. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin as well as through participation in the Sherwood and APS meetings

  10. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization.

  11. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  12. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency investigated for a typical fuel assembly calculation. For both methods, the vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following are found: (1) there is only a small difference in computation speed; (2) the ISS method shows faster convergence; and (3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of the outer iterations compared with free iteration. (author)
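
    The table-look-up device mentioned above can be sketched as follows: exp(-x) is precomputed on a uniform grid and linearly interpolated, trading memory and a couple of multiplies for the library exponential inside the sweep. Table range and size are illustrative.

        import numpy as np

        X_MAX, N = 20.0, 4096
        _TAB = np.exp(-np.linspace(0.0, X_MAX, N))
        _INV_DX = (N - 1) / X_MAX

        def exp_neg(x):
            """Tabulated exp(-x) for x >= 0 with linear interpolation,
            clipped to the last table entry beyond X_MAX."""
            x = np.minimum(np.asarray(x, dtype=float), X_MAX)
            i = np.minimum((x * _INV_DX).astype(int), N - 2)
            frac = x * _INV_DX - i
            return _TAB[i] * (1.0 - frac) + _TAB[i + 1] * frac

        xs = np.linspace(0.0, 20.0, 1_000_000)
        print(np.max(np.abs(exp_neg(xs) - np.exp(-xs))))   # interpolation error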

  13. Time-Discrete Higher-Order ALE Formulations: Stability

    KAUST Repository

    Bonito, Andrea

    2013-01-01

    Arbitrary Lagrangian Eulerian (ALE) formulations deal with PDEs on deformable domains upon extending the domain velocity from the boundary into the bulk with the purpose of keeping mesh regularity. This arbitrary extension has no effect on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time-dependent advection-diffusion-model problem in moving domains, and study their stability properties. The analysis hinges on the validity of the Reynolds' identity for dG. Exploiting the variational structure and assuming exact integration, we prove that our conservative and nonconservative dG schemes are equivalent and unconditionally stable. The same results remain true for piecewise polynomial ALE maps of any degree and suitable quadrature that guarantees the validity of the Reynolds' identity. This approach generalizes the so-called geometric conservation law to higher-order methods. We also prove that simpler Runge-Kutta-Radau methods of any order are conditionally stable, that is, subject to a mild ALE constraint on the time steps. Numerical experiments corroborate and complement our theoretical results. © 2013 Society for Industrial and Applied Mathematics.
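
    For reference, the Reynolds identity on which the analysis hinges reads, in its continuous form with w the ALE domain velocity,

        \[
        \frac{d}{dt} \int_{\Omega(t)} u \, dx
          \;=\; \int_{\Omega(t)} \Bigl( \partial_t u + \nabla \cdot ( u \, \mathbf{w} ) \Bigr) \, dx ,
        \]

    and, loosely speaking, the conservative and nonconservative dG forms correspond to discretizing the two sides of this identity.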

  14. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  15. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
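
    For context, the Grad-Shafranov equation solved here takes the standard axisymmetric form, with poloidal flux psi(R, z), pressure profile p(psi) and poloidal current function F(psi):

        \[
        R \, \frac{\partial}{\partial R} \!\left( \frac{1}{R} \frac{\partial \psi}{\partial R} \right)
          + \frac{\partial^2 \psi}{\partial z^2}
          \;=\; - \mu_0 R^2 \, \frac{dp}{d\psi} \; - \; F \, \frac{dF}{d\psi} .
        \]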

  16. ALE: AES-based lightweight authenticated encryption

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Mendel, Florian; Regazzoni, Francesco

    2014-01-01

    relies on using nonces. We provide an optimized low-area implementation of ALE in ASIC hardware and demonstrate that its area is about 2.5 kGE which is almost two times smaller than that of the lightweight implementations for AES-OCB and ASC-1 using the same lightweight AES engine. At the same time...

  17. Fast vector computation of the characteristics method

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2002-01-01

    Two numerical algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, for the fast neutron transport computation by the characteristics method on vector computers have been developed. The neutron tracking procedure is based on the newly devised vectorized sweeps with long vector length, which enables efficient vector processing in comparison with the ordinary forward sweep. Numerical tests have been done on FUJITSU FACOM VPP-5000 for a realistic PWR fuel assembly. The results show that the vector computation with either of the two methods reduces the computation time to one-fifteenth of that of the scalar computation. Of the two methods, the ISS method is recommended, because it gives faster convergence and requires a smaller memory size than the OES method. (author)

  18. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1989-01-01

    This project consists of two parallel objectives. On the one hand, computational techniques for three-dimensional magnetic confinement configurations were developed or refined; on the other hand, these new techniques were applied to the solution of practical fusion energy problems, or the techniques themselves were transferred to other fusion researchers for practical use in the field.

  19. Novel methods in computational finance

    CERN Document Server

    Günther, Michael; Maten, E

    2017-01-01

    This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techni...

  20. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available We describe the basic statistical methods used in the genetic analysis of human traits: segregation analysis, linkage analysis, and allelic association analysis. Software supporting the implementation of these methods has been developed.

  1. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation.

  2. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  3. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  4. Computational Methods for Biomolecular Electrostatics

    Science.gov (United States)

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951
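
    A central continuum model in this area, and a reasonable anchor for the chapter's discussion, is the Poisson-Boltzmann equation; in one common Gaussian-units form for the dimensionless potential u = e phi / k_B T, with atomic point charges z_i e at positions r_i:

        \[
        \nabla \cdot \bigl( \epsilon(\mathbf{r}) \, \nabla u(\mathbf{r}) \bigr)
          \; - \; \bar{\kappa}^2(\mathbf{r}) \, \sinh u(\mathbf{r})
          \; = \; - \frac{4 \pi e^2}{k_B T} \sum_i z_i \, \delta(\mathbf{r} - \mathbf{r}_i) ,
        \]

    where epsilon is the position-dependent dielectric coefficient and kappa-bar squared is the ion-accessibility-modified Debye-Hückel screening term.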

  5. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  6. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas, after presenting the fundamentals underlying the evaluation of experimental data.

  7. Phil@Scale : Computational methods within philosophy

    NARCIS (Netherlands)

    Van Wierst, Pauline; Vrijenhoek, Sanne; Schlobach, Stefan; Betti, Arianna

    2016-01-01

    In this paper we report the results of Phil@Scale, a project directed at the development of computational methods for (the history of) philosophy. In this project, philosophers and computer scientists together created SalVe, a tool that helps philosophers answer text-based questions.

  8. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  9. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  10. Some methods of computational geometry applied to computer graphics

    NARCIS (Netherlands)

    Overmars, M.H.; Edelsbrunner, H.; Seidel, R.

    1984-01-01

    Windowing a two-dimensional picture means to determine those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited.

  11. An ALE Finite Element Approach for Two-Phase Flow with Phase Change

    Science.gov (United States)

    Gros, Erik; Anjos, Gustavo; Thome, John; Ltcm Team; Gesar Team

    2016-11-01

    In this work, two-phase flow with phase change is investigated through the Finite Element Method (FEM) in the Arbitrary Lagrangian-Eulerian (ALE) framework. The equations are discretized on an unstructured mesh where the interface between the phases is explicitly defined as a sub-set of the mesh. The two-phase interface position is described by a set of interconnected nodes which ensures a sharp representation of the boundary, including the role of the surface tension. The methodology proposed for computing the curvature leads to very accurate results with moderate programming effort and computational costs. Such a methodology can be employed to study accurately many two-phase flow and heat transfer problems in industry such as oil extraction and refinement, design of refrigeration systems, modelling of microfluidic and biological systems and efficient cooling of electronics for computational purposes. The latter is the principal aim of the present research. The numerical results are discussed and compared to analytical solutions and reference results, thereby revealing the capability of the proposed methodology as a platform for the study of two-phase flow with phase change.
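
    The curvature methodology itself is not detailed in the abstract; a simple stand-in that also operates on a node-based interface is the three-point (Menger) curvature, the inverse circumradius of consecutive interface nodes, checked below on a circle of radius 0.5 where the exact value is 2.

        import numpy as np

        def menger_curvature(p0, p1, p2):
            """Curvature of the circle through three consecutive nodes:
            kappa = 4 * area(p0, p1, p2) / (|p0p1| |p1p2| |p0p2|)."""
            a = np.linalg.norm(p1 - p0)
            b = np.linalg.norm(p2 - p1)
            c = np.linalg.norm(p2 - p0)
            cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) \
                  - (p2[0] - p0[0]) * (p1[1] - p0[1])
            return 2.0 * abs(cross) / (a * b * c + 1e-30)  # |cross| = 2 * area

        t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
        pts = 0.5 * np.stack([np.cos(t), np.sin(t)], axis=1)
        print(menger_curvature(pts[0], pts[1], pts[2]))    # ~2.0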

  12. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory, and describes the basic theory of gPC methods.

  13. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  14. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
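
    The two-rating structure is straightforward to express in code. In the sketch below, the rating coefficients and the trapezoidal standard cross section are purely illustrative; in practice both ratings are fitted by regression to field measurements as described above.

        def mean_velocity(v_index, b0=0.05, b1=0.92):
            """Index velocity rating: mean channel velocity V from the ADVM
            index velocity (linear form; coefficients are hypothetical)."""
            return b0 + b1 * v_index

        def area(stage, width=20.0, side_slope=2.0):
            """Stage-area rating for a hypothetical trapezoidal standard
            cross section: A = stage * (bottom width + side_slope * stage)."""
            return stage * (width + side_slope * stage)

        def discharge(v_index, stage):
            """Index velocity method: Q = V * A from the two ratings."""
            return mean_velocity(v_index) * area(stage)

        print(discharge(v_index=0.8, stage=1.5))   # m/s and m give Q in m^3/s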

  15. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  16. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  17. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy

  18. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E.; Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  19. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  20. Computational structural analysis and finite element methods

    CERN Document Server

    Kaveh, A

    2014-01-01

    Graph theory gained initial prominence in science and engineering through its strong links with matrix algebra and computer science. Moreover, the structure of the mathematics is well suited to that of engineering problems in analysis and design. The methods of analysis in this book employ matrix algebra, graph theory and meta-heuristic algorithms, which are ideally suited for modern computational mechanics. Efficient methods are presented that lead to highly sparse and banded structural matrices. The main features of the book include: application of graph theory for efficient analysis; extension of the force method to finite element analysis; application of meta-heuristic algorithms to ordering and decomposition (sparse matrix technology); efficient use of symmetry and regularity in the force method; and simultaneous analysis and design of structures.

  1. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  2. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  3. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  4. Shifted power method for computing tensor eigenpairs.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
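
    The update at the heart of SS-HOPM is easy to state: repeatedly apply the tensor to the current vector, add a shift α·x, and renormalize. Below is a minimal Python sketch for an order-3 symmetric tensor, written from the description above; the shift value, tolerance, and random test tensor are illustrative choices, not the authors' reference implementation.

        import numpy as np
        from itertools import permutations

        def ss_hopm(A, alpha=2.0, iters=500, tol=1e-10):
            """Shifted symmetric higher-order power method (sketch) for an
            order-3 symmetric tensor A: seeks (lambda, x) with A x^2 = lambda x
            and ||x|| = 1. A sufficiently large shift alpha makes the
            iteration monotone and convergent."""
            rng = np.random.default_rng(0)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(iters):
                ax2 = np.einsum('ijk,j,k->i', A, x, x)  # (A x^{m-1})_i
                x_new = ax2 + alpha * x                 # shifted update
                x_new /= np.linalg.norm(x_new)
                if np.linalg.norm(x_new - x) < tol:
                    x = x_new
                    break
                x = x_new
            lam = np.einsum('i,ijk,j,k->', x, A, x, x)  # eigenvalue estimate
            return lam, x

        # Build a random symmetric order-3 tensor and run the iteration.
        T = np.random.default_rng(1).standard_normal((4, 4, 4))
        A = sum(T.transpose(p) for p in permutations(range(3))) / 6.0
        print(ss_hopm(A))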

  5. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  6. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.
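
    As a flavour of the particle approach described above, the sketch below integrates a small chain of particles joined by springs under gravity, with one end pinned. All constants (stiffness, mass, damping, time step) are illustrative; the paper's full scheme also handles collisions and a cellular-automata liquid, which are omitted here.

        import numpy as np

        n = 10                                   # particles in a hanging chain
        pos = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n)], axis=1)
        vel = np.zeros_like(pos)
        rest = 1.0 / (n - 1)                     # spring rest length
        k, mass, dt, damp = 500.0, 0.1, 1e-3, 0.02
        g = np.array([0.0, -9.81])

        for step in range(2000):
            force = np.tile(mass * g, (n, 1))    # gravity on every particle
            for i in range(n - 1):               # spring force between neighbours
                d = pos[i + 1] - pos[i]
                length = np.linalg.norm(d)
                f = k * (length - rest) * d / length
                force[i] += f
                force[i + 1] -= f
            force -= damp * vel                  # crude velocity damping
            vel += dt * force / mass             # explicit Euler update
            pos += dt * vel
            pos[0] = (0.0, 0.0)                  # pin the first particle
            vel[0] = 0.0

        print(pos[-1])                           # free end after the swing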

  7. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  8. Compatible, energy conserving, bounds preserving remap of hydrodynamic fields for an extended ALE scheme

    Science.gov (United States)

    Burton, D. E.; Morgan, N. R.; Charest, M. R. J.; Kenamond, M. A.; Fung, J.

    2018-02-01

    From the very origins of numerical hydrodynamics in the Lagrangian work of von Neumann and Richtmyer [83], the issue of total energy conservation as well as entropy production has been problematic. Because of well known problems with mesh deformation, Lagrangian schemes have evolved into Arbitrary Lagrangian-Eulerian (ALE) methods [39] that combine the best properties of Lagrangian and Eulerian methods. Energy issues have persisted for this class of methods. We believe that fundamental issues of energy conservation and entropy production in ALE require further examination. The context of the paper is an ALE scheme that is extended in the sense that it permits cyclic or periodic remap of data between grids of the same or differing connectivity. The principal design goals for a remap method then consist of total energy conservation, bounded internal energy, and compatibility of kinetic energy and momentum. We also have secondary objectives of limiting velocity and stress in a non-directional manner, keeping primitive variables monotone, and providing a higher than second order reconstruction of remapped variables. In particular, the new contributions fall into three categories associated with: energy conservation and entropy production, reconstruction and bounds preservation of scalar and tensor fields, and conservative remap of nonlinear fields. The paper presents a derivation of the methods, details of implementation, and numerical results for a number of test problems. The methods requires volume integration of polynomial functions in polytopal cells with planar facets, and the requisite expressions are derived for arbitrary order.
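
    The conservation requirement at the core of the remap step can be illustrated in one dimension: integrate the old cell averages over their overlaps with the new cells and divide by the new cell volumes. The sketch below is a first-order 1D version of that idea (exactly conservative, and bounds-preserving because each new value is a convex combination of old ones); the paper's scheme is three-dimensional, higher order, and energy-compatible, none of which is reproduced here.

        import numpy as np

        def conservative_remap_1d(x_old, u_old, x_new):
            """First-order conservative remap of cell averages u_old (on cell
            edges x_old) onto edges x_new via exact cell-overlap integration.
            The total integral is conserved, and each new value is a convex
            combination of old ones, so no new extrema are created."""
            u_new = np.zeros(len(x_new) - 1)
            for j in range(len(x_new) - 1):
                lo, hi = x_new[j], x_new[j + 1]
                total = 0.0
                for i in range(len(x_old) - 1):
                    overlap = min(hi, x_old[i + 1]) - max(lo, x_old[i])
                    if overlap > 0.0:
                        total += u_old[i] * overlap
                u_new[j] = total / (hi - lo)
            return u_new

        x_old = np.linspace(0.0, 1.0, 11)
        x_new = np.linspace(0.0, 1.0, 7)
        u_old = np.sin(np.pi * 0.5 * (x_old[:-1] + x_old[1:]))
        u_new = conservative_remap_1d(x_old, u_old, x_new)
        # Conservation check: the two integrals agree to round-off.
        print(np.dot(u_old, np.diff(x_old)), np.dot(u_new, np.diff(x_new)))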

  9. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  10. Analytic Method for Computing Instrument Pointing Jitter

    Science.gov (United States)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
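
    The state-space idea can be illustrated with the standard steady-state covariance computation: for dx/dt = Ax + Bw driven by white noise w of intensity q, the covariance P solves the Lyapunov equation AP + PAᵀ + qBBᵀ = 0, and the output rms follows with no frequency-domain integration. The sketch below shows that generic calculation; the cited work uses a specific windowed jitter definition, so this only conveys the flavour of the analytic approach, with illustrative numbers.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # illustrative 2nd-order mode
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])                 # pointing output y = C x
        q = 1e-4                                   # white-noise intensity

        # Solve A P + P A^T + q B B^T = 0 for the steady-state covariance P.
        P = solve_continuous_lyapunov(A, -q * B @ B.T)
        rms = float(np.sqrt(C @ P @ C.T))          # rms of the output
        print(rms)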

  11. Svařit, ale neprovařit!

    Czech Academy of Sciences Publication Activity Database

    Řípa, Milan

    March (2016) Institutional support: RVO:61389021 Keywords: ITER * vacuum chamber * welding Subject RIV: BL - Plasma and Gas Discharge Physics http://www.3pol.cz/cz/rubriky/jaderna-fyzika-a-energetika/1811-svarit-ale-neprovarit

  12. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable for preventing the risk of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, so such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
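
    A minimal version of the surrogate-plus-inverse idea can be sketched as follows: train a regularized neural network to map delamination parameters to natural frequencies, then search the parameter space for the candidate whose predicted frequencies best match the measured ones. The code below uses a toy analytic frequency model in place of the paper's FE solver, and an L2-regularized scikit-learn MLP standing in for the Bayesian-regularized ANN and memetic search; everything here is illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def frequencies(loc, size):
            """Toy stand-in for the FE model: first three natural frequencies
            of a laminate as a function of delamination location and size."""
            base = np.array([120.0, 330.0, 640.0])
            return base * (1.0 - 0.3 * size * np.exp(-((loc - 0.5) ** 2) / 0.05))

        # Training set: sampled (location, size) pairs -> frequency triples.
        X = rng.uniform([0.0, 0.0], [1.0, 0.4], size=(500, 2))
        Y = np.array([frequencies(*x) for x in X])

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                                 max_iter=5000, random_state=0).fit(X, Y)

        # Inverse problem: find the delamination whose predicted frequencies
        # best match a 'measured' set (here generated from the toy model).
        measured = frequencies(0.62, 0.25)
        grid = rng.uniform([0.0, 0.0], [1.0, 0.4], size=(20000, 2))
        errors = np.linalg.norm(surrogate.predict(grid) - measured, axis=1)
        print(grid[np.argmin(errors)])       # should land near (0.62, 0.25)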

  13. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code shows conservative results when the generalized geometry option is not used. (author)

  14. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
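
    The retrieval loop described above, minimizing a fitness function that measures the mismatch between observed and synthetic spectra, can be sketched with a toy genetic algorithm. In the Python sketch below, the forward model, parameter bounds, and GA settings are all hypothetical placeholders for the article's instrument-specific models.

        import numpy as np

        rng = np.random.default_rng(0)

        def synthetic_spectrum(params, wav):
            """Hypothetical forward model: parameters -> synthetic spectrum."""
            amp, width = params
            return amp * np.exp(-((wav - 0.5) / width) ** 2)

        def fitness(params, wav, observed):
            """Dissimilarity between observed and synthetic spectra (minimize)."""
            return np.sum((synthetic_spectrum(params, wav) - observed) ** 2)

        wav = np.linspace(0.0, 1.0, 100)
        observed = synthetic_spectrum((1.3, 0.15), wav) \
                   + 0.01 * rng.standard_normal(100)

        # Tiny genetic algorithm: truncation selection, blend crossover,
        # Gaussian mutation. Illustrative only.
        lo, hi = [0.1, 0.01], [3.0, 0.5]
        pop = rng.uniform(lo, hi, size=(40, 2))
        for gen in range(100):
            scores = np.array([fitness(p, wav, observed) for p in pop])
            elite = pop[np.argsort(scores)[:10]]                 # keep best 25%
            parents = elite[rng.integers(0, 10, size=(40, 2))]   # random pairs
            w = rng.uniform(size=(40, 1))
            pop = w * parents[:, 0] + (1 - w) * parents[:, 1]    # blend crossover
            pop += 0.02 * rng.standard_normal(pop.shape)         # mutation
            pop = np.clip(pop, lo, hi)                           # keep in bounds
        best = pop[np.argmin([fitness(p, wav, observed) for p in pop])]
        print(best)   # should approach (1.3, 0.15)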

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA the new powerful algorithm which improves many geometric computations and makes th

  16. Deterministic Search Methods for Computational Protein Design.

    Science.gov (United States)

    Traoré, Seydou; Allouche, David; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2017-01-01

    One main challenge in Computational Protein Design (CPD) lies in the exploration of the amino-acid sequence space, while considering, to some extent, side chain flexibility. The exorbitant size of the search space urges for the development of efficient exact deterministic search methods enabling identification of low-energy sequence-conformation models, corresponding either to the global minimum energy conformation (GMEC) or an ensemble of guaranteed near-optimal solutions. In contrast to stochastic local search methods that are not guaranteed to find the GMEC, exact deterministic approaches always identify the GMEC and prove its optimality in finite but exponential worst-case time. After a brief overview on these two classes of methods, we discuss the grounds and merits of four deterministic methods that have been applied to solve CPD problems. These approaches are based either on the Dead-End-Elimination theorem combined with A* algorithm (DEE/A*), on Cost Function Networks algorithms (CFN), on Integer Linear Programming solvers (ILP) or on Markov Random Fields solvers (MRF). The way two of these methods (DEE/A* and CFN) can be used in practice to identify low-energy sequence-conformation models starting from a pairwise decomposed energy matrix is detailed in this review.
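
    Of the four approaches, the Dead-End-Elimination step is the easiest to illustrate: rotamer r at position i can be discarded whenever some competitor r' satisfies E(i_r) - E(i_r') + Σ_j min_s [E(i_r, j_s) - E(i_r', j_s)] > 0, since r then cannot belong to the GMEC. The sketch below applies one sweep of this (Goldstein-style) criterion to random toy energy tables; it is illustrative only, not a production CPD solver.

        import numpy as np

        def dee_eliminate(E_self, E_pair):
            """One sweep of the simple DEE criterion. E_self[i][r] is a
            self energy; E_pair[i][j][r][s] a pair energy. Returns the
            surviving rotamer indices per position."""
            n = len(E_self)
            alive = [list(range(len(E_self[i]))) for i in range(n)]
            for i in range(n):
                for r in list(alive[i]):
                    for rp in alive[i]:
                        if rp == r:
                            continue
                        gap = E_self[i][r] - E_self[i][rp]
                        for j in range(n):
                            if j == i:
                                continue
                            gap += min(E_pair[i][j][r][s] - E_pair[i][j][rp][s]
                                       for s in alive[j])
                        if gap > 0:              # r is dominated by r'
                            alive[i].remove(r)
                            break
            return alive

        rng = np.random.default_rng(0)
        n_pos, n_rot = 4, 5
        E_self = rng.standard_normal((n_pos, n_rot))
        E_pair = rng.standard_normal((n_pos, n_pos, n_rot, n_rot))
        E_pair = (E_pair + E_pair.transpose(1, 0, 3, 2)) / 2   # symmetrize
        print(dee_eliminate(E_self, E_pair))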

  17. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  18. Computational electromagnetic methods for transcranial magnetic stimulation

    Science.gov (United States)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel internally combined volume-surface IE method that is stable at high permittivity and low frequency was developed. The method not only applies to the analysis of high-permittivity objects, but it is also the first IE tool that is stable when analyzing highly-inhomogeneous negative permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  19. Development and application of computer-aided design methods for cell factory optimization

    DEFF Research Database (Denmark)

    Cardoso, Joao

    and machine learning. The process of creating strains with commercially relevant titers is time consuming and expensive. Computer-aided design (CAD) software can help scientists build better strains by providing models and algorithms that can be used to generate and test hypotheses before implementing them… on metabolite targets. MARSI designs can be implemented using ALE or CSI. We used MARSI to enumerate metabolite targets in Escherichia coli that could be used to replace experimentally validated gene knockouts. Genetic variability occurs naturally in cells. However, the effects of those variations… and machine learning tools, we explored the landscape of kcats using multiple enzyme sequences and their chemical reactions…

  20. Benefits of atomic-level processing by quasi-ALE and ALD technique

    Science.gov (United States)

    Honda, M.; Katsunuma, T.; Tabata, M.; Tsuji, A.; Oishi, T.; Hisamatsu, T.; Ogawa, S.; Kihara, Y.

    2017-06-01

    A new technology has been developed using the atomic layer etching (ALE) and atomic layer deposition (ALD) concepts. It has been applied to self-aligned contacts (SAC) and patterning processes, for the sub 7 nm technology generation. In the SAC process, ultra-high selectivity of SiO2 etching towards SiN is required, for which we have developed quasi-ALE technique for SiO2 etching. We were able to significantly improve the trade-off between the etching ability of SiO2 on the micro slit portions and SiN selectivity. Quasi-ALE precisely controls the reaction layer thickness of the surface, by controlling the radical flux and ion flux independently, and hence enables etching at lower ion energies (Ei < 250 eV). On the other hand, in the patterning processes, the shrinking of critical dimensions (CD) without loading is mandatory. Therefore, we developed a new process flow that combines ALD technique and etching. With this method, we were able to achieve CD shrinking at atomic-layer level precision for various patterns, without causing CD loading. In addition, we were also able to uniformly control the CD shrinkage amount across the whole wafer. This is because this technique takes advantage of the deposition step which is independent of the pattern density and the location on the wafer by self-limited reactions.

  1. Computational methods applied to wind tunnel optimization

    Science.gov (United States)

    Lindsay, David

    This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase of 7 or more and a velocity uniformity of 0.25% or better, in a compact length without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping

  2. 47 CFR 80.771 - Method of computing coverage.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Method of computing coverage. 80.771 Section 80... STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna...

  3. Symbolic Substitution Methods For Optical Computing

    Science.gov (United States)

    Murdocca, M. J.; Huang, A.

    1989-02-01

    Symbolic substitution is a method of computing based on parallel binary pattern replacement that can be implemented with simple optical components and regular free-space interconnection schemes. A two-dimensional pattern is searched for in parallel in an array and is replaced with another pattern. Pattern transformation rules can be applied sequentially or in parallel to realize complex functions. When the substitution space is modified to be log2 N connected for N binary spots, and masks are allowed to customize the system, then optical digital circuits using symbolic substitution for network interconnects can be made nearly as efficient in terms of gate count and circuit depth as conventional arbitrary interconnection schemes allow. We describe an optical setup that requires no more than a fanin and fanout of two using optically nonlinear logic devices and a free-space interconnection scheme based on symbolic substitution.
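
    The recognize-then-substitute cycle is easy to model in software: scan the binary array for every occurrence of a search pattern, then write the replacement pattern at all matched positions in parallel. The Python sketch below shows one such step on a toy grid; the patterns and the sequential scan are illustrative stand-ins for the optical, fully parallel recognition hardware.

        import numpy as np

        def substitute(grid, search, replace):
            """One symbolic-substitution step (sketch): find every occurrence
            of the 2D `search` pattern in the binary `grid` (recognition
            phase), then write `replace` at all matched positions
            (substitution phase)."""
            gh, gw = grid.shape
            ph, pw = search.shape
            hits = [(y, x)
                    for y in range(gh - ph + 1)
                    for x in range(gw - pw + 1)
                    if np.array_equal(grid[y:y+ph, x:x+pw], search)]
            out = grid.copy()
            for y, x in hits:
                out[y:y+ph, x:x+pw] = replace
            return out

        # Example rule: rewrite every vertical pair [1,1] to [0,1].
        grid = np.array([[1, 0, 1],
                         [1, 1, 1],
                         [0, 1, 0]])
        print(substitute(grid, np.array([[1], [1]]), np.array([[0], [1]])))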

  4. Computational simulation methods for composite fracture mechanics

    Science.gov (United States)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  5. Optical design teaching by computing graphic methods

    Science.gov (United States)

    Vazquez-Molini, D.; Muñoz-Luna, J.; Fernandez-Balbuena, A. A.; Garcia-Botella, A.; Belloni, P.; Alda, J.

    2012-10-01

    One of the key challenges in the teaching of Optics is that students need to know not only the mathematics of optical design but also, more importantly, how to grasp and understand optics in a three-dimensional space. Having a clear image of the problem to solve is the first step towards solving it. Therefore, students must not only know the equation of the law of refraction but also understand how the main parameters of this law interact with one another; this should be a major goal of the teaching course. Optical graphic methods are a valuable tool in this respect, since they combine the advantage of visual information with the accuracy of computer calculation.

  6. Computational Evaluation of the Traceback Method

    Science.gov (United States)

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  7. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise, and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations is analyzed, and compared to the treatment of these phenomena in the published literature. Next, we examine calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
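
    The first technique in the abstract, field inversion in spatial-frequency space with a smooth Gaussian cutoff to tame high-frequency noise, can be sketched in a few lines. The transfer function below is a hypothetical field-to-current kernel, not the thesis's Green's function; the point is only the divide-then-damp structure of the inversion.

        import numpy as np

        def invert_field_1d(bz, kernel_k, sigma):
            """Recover a 1D current density from a measured field profile:
            divide by the field-to-current transfer function in k-space and
            apply a Gaussian cutoff so noise amplified at high spatial
            frequency is smoothly suppressed."""
            k = 2.0 * np.pi * np.fft.fftfreq(len(bz))
            jk = np.fft.fft(bz) / kernel_k * np.exp(-0.5 * (k * sigma) ** 2)
            return np.real(np.fft.ifft(jk))

        # Hypothetical smooth kernel decaying with |k| (illustrative only).
        n = 256
        k = 2.0 * np.pi * np.fft.fftfreq(n)
        kernel = 1.0 / (1.0 + k ** 2)
        field = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)  # fake measurement
        print(invert_field_1d(field, kernel, sigma=2.0).shape)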

  8. Computational Studies of Protein Hydration Methods

    Science.gov (United States)

    Morozenko, Aleksandr

    It is widely appreciated that water plays a vital role in proteins' functions. The long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds that is composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate enzymes' catalytic reactions by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies that have been performed to gain insights into the problems of fast and accurate prediction of potential water sites inside internal cavities of a protein. Specifically, we focus on the task of attaining correspondence between results obtained from computational experiments and experimental data available from X-ray structures. An overview of existing methods of predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. A description of the differences of water molecules in various media, particularly gas, liquid and the protein interior, and theoretical aspects of designing an adequate model of water for the protein environment are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods of placement of water molecules into internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to a protein body, which achieves a higher degree of agreement with the experimental data reported in protein crystal structures than other techniques available in the world of biophysical software. The new methodology is tested on a set of high-resolution crystal structures of oligopeptide-binding protein (OppA) containing a large number of resolved internal water molecules and applied to bovine heart cytochrome c oxidase in the fully

  9. The LOCAL attack: Cryptanalysis of the authenticated encryption scheme ALE

    DEFF Research Database (Denmark)

    Khovratovich, Dmitry; Rechberger, Christian

    2014-01-01

    We show how to produce a forged (ciphertext, tag) pair for the scheme ALE with data and time complexity of 2^102 ALE encryptions of short messages and the same number of authentication attempts. We use a differential attack based on a local collision, which exploits the availability of extracted state bytes to the adversary. Our approach allows for a time-data complexity tradeoff, with an extreme case of a forgery produced after 2^119 attempts and based on a single authenticated message. Our attack is further turned into a state recovery and a universal forgery attack with a time complexity

  10. Diffusive mesh relaxation in ALE finite element numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dube, E.I.

    1996-06-01

    The theory for a diffusive mesh relaxation algorithm is developed for use in three-dimensional Arbitrary Lagrangian/Eulerian (ALE) finite element simulation techniques. This mesh relaxer is derived by a variational principle for an unstructured 3D grid using finite elements, and incorporates hourglass controls in the numerical implementation. The diffusive coefficients are based on the geometric properties of the existing mesh, and are chosen so as to allow for a smooth grid that retains the general shape of the original mesh. The diffusive mesh relaxation algorithm is then applied to an ALE code system, and results from several test cases are discussed.
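
    The diffusive idea, letting each interior node drift toward an average of its neighbours until the grid is smooth while boundary nodes stay put, can be conveyed with a simple Laplacian-style smoother. The sketch below is that reduced version only; the cited algorithm is variational, three-dimensional, unstructured, and adds hourglass control, none of which appears here.

        import numpy as np

        def relax_mesh(nodes, neighbors, boundary, sweeps=50, omega=0.5):
            """Laplacian-style diffusive relaxation: move each interior node
            a fraction omega toward the centroid of its neighbours; boundary
            nodes are held fixed. neighbors[i] lists node i's neighbour ids."""
            x = nodes.astype(float).copy()
            for _ in range(sweeps):
                for i in range(len(x)):
                    if i in boundary:
                        continue
                    x[i] += omega * (np.mean(x[neighbors[i]], axis=0) - x[i])
            return x

        # Tiny 1D example: interior nodes of an unevenly spaced line relax
        # toward uniform spacing between the fixed endpoints.
        nodes = np.array([[0.0], [0.1], [0.15], [0.7], [1.0]])
        neighbors = {1: [0, 2], 2: [1, 3], 3: [2, 4]}
        print(relax_mesh(nodes, neighbors, boundary={0, 4}).ravel())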

  11. Deposition of HgTe by electrochemical atomic layer epitaxy (EC-ALE)

    CSIR Research Space (South Africa)

    Venkatasamy, V

    2006-04-01

    Full Text Available This paper describes the first instance of HgTe growth by electrochemical atomic layer epitaxy (EC-ALE). EC-ALE is the electrochemical analog of atomic layer epitaxy (ALE) and atomic layer deposition (ALD), all of which are based on the growth...

  12. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques and structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Explosive and structure modeling in two different codes make it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.

  13. Current status of uncertainty analysis methods for computer models

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu

    1989-11-01

    This report surveys several existing uncertainty analysis methods for estimating computer output uncertainty caused by input uncertainties, illustrating application examples of those methods to three computer models, MARCH/CORRAL II, TERFOC and SPARC. Merits and limitations of the methods are assessed in the application, and recommendation for selecting uncertainty analysis methods is provided. (author)

  14. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  15. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  16. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, as well as on calculation methods once thought too difficult to put into practice, continues actively, driven by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is devoted to further improving economics and safety. This paper describes some new research results on nuclear computational methods and their application to reactor design, to introduce recent trends: 1) advancement of computational methods, 2) core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)

  17. Computational structural biology: methods and applications

    National Research Council Canada - National Science Library

    Schwede, Torsten; Peitsch, Manuel Claude

    2008-01-01

    ... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...

  18. Computer Literacy Systematic Literature Review Method

    NARCIS (Netherlands)

    Kegel, Roeland Hendrik,Pieter; Barth, Susanne; Klaassen, Randy; Wieringa, Roelf J.

    2017-01-01

    Although there have been many attempts to define the concept 'computer literacy', no consensus has been reached: many variations of the concept exist within the literature. The majority of papers do not explicitly define the concept at all, instead using an unjustified subset of elements related to

  19. Computational Methods for Problems in Fluid Dynamics

    Science.gov (United States)

    1989-02-01

    remedy this disadvantage, multigrid methods combine basic iterative methods with other methods that are complementary. One of the reasons that accounts for the effectiveness of multigrid methods seems to be the idea of approximating the solution of a large system from a subspace whose dimension is… 1988]) and multigrid methods (Hackbusch [1985], McCormick [1987]) incorporate into their overall solution strategies an additive correction algorithm of…

  20. Studying Kv Channels Function using Computational Methods.

    Science.gov (United States)

    Deyawe, Audrey; Kasimova, Marina A; Delemotte, Lucie; Loussouarn, Gildas; Tarek, Mounir

    2018-01-01

    In recent years, molecular modeling techniques, combined with MD simulations, have provided significant insights into the intrinsic properties of voltage-gated (Kv) potassium channels. Among the success stories are molecular-level details of the effects of mutations, the unraveling of several metastable intermediate states, and the influence of a particular lipid, PIP2, on the stability and the modulation of Kv channel function. These computational studies offered a detailed view that could not have been reached through experimental studies alone. With the increase of cross-disciplinary studies, numerous experiments have provided validation of these computational results, which increases the reliability of molecular modeling for the study of Kv channels. This chapter offers a description of the main techniques used to model Kv channels at the atomistic level.

  1. Statistical methods and computing for big data.

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
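
    Of the three method classes, online updating is the simplest to illustrate: for least squares it suffices to accumulate the sufficient statistics XᵀX and Xᵀy chunk by chunk, so the estimate can be refreshed as stream data arrive without revisiting old observations. The Python sketch below shows that pattern with simulated chunks; it illustrates the idea only and is not the article's software.

        import numpy as np

        class OnlineLeastSquares:
            """Online-updating sketch for stream data: keep X'X and X'y so
            the least-squares estimate can be renewed chunk by chunk."""
            def __init__(self, p):
                self.xtx = np.zeros((p, p))
                self.xty = np.zeros(p)

            def update(self, X, y):
                self.xtx += X.T @ X          # accumulate sufficient statistics
                self.xty += X.T @ y

            def estimate(self):
                return np.linalg.solve(self.xtx, self.xty)

        rng = np.random.default_rng(0)
        beta = np.array([1.0, -2.0, 0.5])
        ols = OnlineLeastSquares(3)
        for chunk in range(100):             # data arriving in chunks
            X = rng.standard_normal((1000, 3))
            y = X @ beta + rng.standard_normal(1000)
            ols.update(X, y)
        print(ols.estimate())                # close to beta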

  3. ALE3D Simulation and Measurement of Violence in a Fast Cookoff Experiment with LX-10

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Maienschein, J L; Howard, W M; deHaven, M R

    2006-11-22

    We performed a computational and experimental analysis of fast cookoff of LX-10 (94.7% HMX, 5.3% Viton A) confined in a 2 kbar steel tube with reinforced end caps. A Scaled-Thermal-Explosion-eXperiment (STEX) was completed in which three radiant heaters were used to heat the vessel until ignition, resulting in a moderately violent explosion after 20.4 minutes. Thermocouple measurements showed tube temperatures as high as 340 C at ignition and LX-10 surface temperatures as high as 279 C, which is near the melting point of HMX. Three micro-power radar systems were used to measure mean fragment velocities of 840 m/s. Photonic Doppler Velocimeters (PDVs) showed a rapid acceleration of fragments over 80 µs. A one-dimensional ALE3D cookoff model at the vessel midplane was used to simulate the heating, thermal expansion, LX-10 decomposition, and closing of the gap between the HE (High Explosive) and vessel wall. Although the ALE3D simulation terminated before ignition, the model provided a good representation of heat transfer through the case and across the dynamic gap to the explosive.

  4. Caller behaviour classification using computational intelligence methods.

    Science.gov (United States)

    Patel, Pretesh B; Marwala, Tshilidzi

    2010-02-01

    A classification system that accurately categorizes caller interaction within Interactive Voice Response systems is essential in determining caller behaviour. Field and call performance classifiers for a pay beneficiary application are developed. Genetic Algorithms, Multi-Layer Perceptron neural network, Radial Basis Function neural network, Fuzzy Inference Systems and Support Vector Machine computational intelligence techniques were considered in this research. Exceptional results were achieved: classifiers with accuracy values greater than 90% were developed. The preferred models for the fields 'Say amount' and 'Say confirmation' and for call performance classification are ensembles of classifiers. However, the Multi-Layer Perceptron classifiers performed the best in 'Say account' and 'Select beneficiary' classification.

  5. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  6. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  7. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  8. 12 CFR 227.25 - Unfair balance computation method.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section 227.25 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in paragraph...

  9. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM), widely used in level set applications, by computing the parameterized boundary location every pixel came from during the boundary evolution.

  10. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  11. Near threshold computing technology, methods and applications

    CERN Document Server

    Silvano, Cristina

    2016-01-01

    This book explores near-threshold computing (NTC), a design space using techniques to run digital chips (processors) near the lowest possible voltage. Readers are given specific techniques to design chips that are extremely robust, tolerating variability and resilient against errors. The book provides an introduction to near-threshold computing, equipping the reader with a variety of tools to face the challenges of the power/utilization wall; demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point; and investigates how performance guarantees can be ensured when moving toward near-threshold manycore chips through variability-aware voltage and frequency allocation schemes.

  12. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
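
    The core contraction-decomposition move can be illustrated in a few lines of Python (our sketch, not the authors' code; shapes and the truncation cutoff are arbitrary): two tensors sharing a bond are contracted and then split back with a truncated SVD, which is how compression on a bond is achieved:

```python
# One compression step of a contraction-decomposition sweep: contract
# across a shared bond, then re-split with a truncated SVD.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))          # (left_leg, shared_bond)
B = rng.normal(size=(8, 4))          # (shared_bond, right_leg)

theta = A @ B                        # contract over the shared bond
U, s, Vh = np.linalg.svd(theta, full_matrices=False)
keep = s > 1e-12 * s[0]              # discard negligible singular values
A_new = U[:, keep] * s[keep]         # reabsorb weights into the left tensor
B_new = Vh[keep]

print(np.allclose(A_new @ B_new, theta))   # contraction is preserved
print(A.shape[1], "->", A_new.shape[1])    # bond dimension after compression
```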

  13. An experimental unification of reservoir computing methods.

    Science.gov (United States)

    Verstraeten, D; Schrauwen, B; D'Haene, M; Stroobandt, D

    2007-04-01

    Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compares all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure is dependent on the dynamics of the reservoir in response to the inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all the types of Reservoir Computing and allows the easy simulation of a wide range of reservoir topologies for a number of benchmarks.
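
    Of the three implementations compared, the echo state network is the easiest to sketch; the following minimal Python example (ours, not from the Reservoir Computing Toolbox; all sizes, scalings and the one-step-ahead task are illustrative) shows the defining pattern of a fixed random reservoir with a trained linear readout:

```python
# Minimal echo state network: fixed random reservoir, ridge-trained readout.
import numpy as np

rng = np.random.default_rng(2)
n_res, T = 200, 2000
u = np.sin(0.2 * np.arange(T + 1))                # input signal

W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):                                # drive the reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

washout = 100                                     # drop initial transient
X, y = states[washout:], u[washout + 1 : T + 1]   # one-step-ahead target
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))              # training MSE
```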

  14. Analytical and computational methods in electromagnetics

    CERN Document Server

    Garg, Ramesh

    2008-01-01

    This authoritative resource offers you clear and complete explanation of this essential electromagnetics knowledge, providing you with the analytical background you need to understand such key approaches as MoM (method of moments), FDTD (Finite Difference Time Domain) and FEM (Finite Element Method), and Green's functions. This comprehensive book includes all math necessary to master the material.

  15. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  16. Isolation of an osmotolerant ale strain of Saccharomyces cerevisiae.

    Science.gov (United States)

    Pironcheva, G

    1998-01-01

    Saccharomyces cerevisiae (ale strain) grown in batch culture to stationary phase was tested for its tolerance to heat (50 degrees C for 5 min), hydrogen peroxide (0.3 M) and salt (growth in 1.5 M sodium chloride/YPD medium). Yeast cells which have been exposed previously to heat shock are more tolerant to hydrogen peroxide and high salt concentrations (1.5 M NaCl) than the controls. Their fermentative activity, as judged by glucose consumption, and their viability, as judged by cell number and density, were higher than those of cells not previously exposed to heat shock. The experimental conditions facilitated the isolation of an S. cerevisiae ale strain tolerant to heat and to other agents such as hydrogen peroxide and sodium chloride.

  17. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity.

  18. Supersymmetric 3-branes on smooth ALE manifolds with flux

    International Nuclear Information System (INIS)

    Bertolini, M.; Campos, V.L.; Ferretti, G.; Fre, P.; Salomonson, P.; Trigiante, M.

    2001-01-01

    We construct a new family of classical BPS solutions of type IIB supergravity describing 3-branes transverse to a 6-dimensional space with topology R^2 x ALE. They are characterized by a non-trivial flux of the supergravity 2-forms through the homology 2-cycles of a generic smooth ALE manifold. Our solutions have two Killing spinors and thus preserve N=2 supersymmetry. They are expressed in terms of a quasi-harmonic function H (the 'warp factor'), whose properties we study in the case of the simplest ALE, namely the Eguchi-Hanson manifold. The equation for H is identified as an instance of the confluent Heun equation. We write explicit power series solutions and solve the recurrence relation for the coefficients, discussing also the relevant asymptotic expansions. While, as in all such N=2 solutions, supergravity breaks down near the brane, the smoothing out of the vacuum geometry has the effect that the warp factor is regular in a region near the cycle. We interpret the behavior of the warp factor as describing a three-brane charge 'smeared' over the cycle and consider the asymptotic form of the geometry in that region, showing that conformal invariance is broken even when the complex type IIB 3-form field strength is assumed to vanish. We conclude with a discussion of the basic features of the gauge theory dual.

  19. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  20. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest-power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  1. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...
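
    As a taste of the FDTD machinery the book builds on, here is a minimal 1D free-space FDTD loop in Python (our sketch, not from the book; grid size, source and step counts are arbitrary, and the grid ends are simply reflecting rather than absorbing):

```python
# Leapfrog FDTD update in 1D, normalized units with Courant number 0.5.
import numpy as np

nz, nsteps = 200, 300
ez = np.zeros(nz)                        # electric field samples
hy = np.zeros(nz)                        # magnetic field samples (staggered)

for n in range(nsteps):
    ez[1:] += 0.5 * (hy[:-1] - hy[1:])   # E update from the curl of H
    ez[50] += np.exp(-((n - 30.0) / 10.0) ** 2)   # soft Gaussian source
    hy[:-1] += 0.5 * (ez[:-1] - ez[1:])  # H update, staggered half a step

print(float(ez.max()))                   # pulse propagating through the grid
```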

  2. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  3. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.

  4. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis, the field of application of each method is recommended.

  5. Computational Methods for Physicists Compendium for Students

    CERN Document Server

    Sirca, Simon

    2012-01-01

    This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways of optimizing program execution speed. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less stress is placed on the explanation of individual algorithms. The aim is to foster independent thinking and a healthy amount of scepticism and scrutiny in the reader, rather than blind reliance on readily available commercial tools.

  6. A connectionist computational method for face recognition

    Directory of Open Access Journals (Sweden)

    Pujol Francisco A.

    2016-06-01

    In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is shown afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.

  7. Computing Method of Forces on Rivet

    Directory of Open Access Journals (Sweden)

    Ion DIMA

    2014-03-01

    This article aims to provide a quick methodology for calculating forces on rivets in single shear using the finite element method (FEM – NASTRAN/PATRAN). These forces can be used for bearing, inter-rivet buckling and riveting checks. To make this method efficient and fast, a macro based on the methodology described in the article has been developed. The macro was written in Visual Basic with an Excel interface. In the early phase of any aircraft project, when the rivet types and positions are not yet precisely known, rivets are modelled as attachment elements between items node on node in the finite element model, without taking account of the rivet positions. Although the rivets are not modelled explicitly in the finite element model, this method together with the macro enables a quick extraction and calculation of the forces on the rivets. This calculation is intended for the critical case, selected from the NASTRAN stress plots for max./min. principal stress and shear.

  8. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  9. Customizing computational methods for visual analytics with big data.

    Science.gov (United States)

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  10. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision-making.

  11. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
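
    The edge-to-MTF pipeline described above can be sketched in a few lines of Python (ours, not the report's code; the analytically blurred edge stands in for a measured profile across the cylinder boundary):

```python
# MTF from an edge response: differentiate the edge spread function to get a
# line spread function, Fourier-transform it, and normalize at zero frequency.
import numpy as np

x = np.linspace(-5.0, 5.0, 512)
dx = x[1] - x[0]
lsf_true = np.exp(-x**2 / 2.0)           # Gaussian blur kernel (stand-in PSF)
esf = np.cumsum(lsf_true) * dx           # synthetic "measured" edge response
esf /= esf[-1]

lsf = np.gradient(esf, x)                # numerical derivative of the edge
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                            # normalize so MTF(0) = 1
freq = np.fft.rfftfreq(lsf.size, d=dx)
print(freq[:4], mtf[:4])
```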

  12. Advanced Methods and Applications in Computational Intelligence

    CERN Document Server

    Nikodem, Jan; Jacak, Witold; Chaczko, Zenon; ACASE 2012

    2014-01-01

    This book offers an excellent presentation of intelligent engineering and informatics foundations for researchers in this field as well as many examples with industrial application. It contains extended versions of selected papers presented at the inaugural ACASE 2012 Conference dedicated to the Applications of Systems Engineering. This conference was held from the 6th to the 8th of February 2012, at the University of Technology, Sydney, Australia, organized by the University of Technology, Sydney (Australia), Wroclaw University of Technology (Poland) and the University of Applied Sciences in Hagenberg (Austria). The  book is organized into three main parts. Part I contains papers devoted to the heuristic approaches that are applicable in situations where the problem cannot be solved by exact methods, due to various characteristics or  dimensionality problems. Part II covers essential issues of the network management, presents intelligent models of the next generation of networks and distributed systems ...

  13. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  14. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  15. Mathematical and computational methods in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Dehesa, J.S.; Gomez, J.M.G.; Polls, A.

    1983-01-01

    The lectures, covering various aspects of the many-body problem in nuclei, review present knowledge and include some unpublished material as well. Bohigas and Giannoni discuss the fluctuation properties of spectra of many-body systems by means of random matrix theories, and the attempts to search for quantum mechanical manifestations of classical chaotic motion. The role of spectral distributions (expressed as explicit functions of the microscopic matrix elements of the Hamiltonian) in the statistical spectroscopy of nuclear systems is analyzed by French. Zucker, after a brief review of the theoretical basis of the shell model, discusses a reformulation of the theory of effective interactions and gives a survey of the linked cluster theory. Goeke's lectures center on the mean-field methods, particularly TDHF, used in the investigation of the large-amplitude nuclear collective motion, pointing out both the successes and failures of the theory.

  16. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods.

  17. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the small perturbation parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the FitzHugh-Nagumo system.

  18. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
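
    The discrete-LCT-specific structures are beyond a short sketch, but the adaptive ingredient of the paper, the least-mean-square update, is shown below in a generic system-identification setting (our Python sketch; filter length, step size and the unknown system are illustrative):

```python
# Standard LMS adaptive filter: weights are nudged along the
# instantaneous error gradient until they identify the unknown system.
import numpy as np

rng = np.random.default_rng(3)
h_true = np.array([0.6, -0.3, 0.1])      # unknown system to identify
N, L, mu = 5000, 3, 0.05                 # samples, taps, step size
x = rng.normal(size=N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N)

w = np.zeros(L)
for n in range(L, N):
    u = x[n - L + 1 : n + 1][::-1]       # most recent L samples, newest first
    e = d[n] - w @ u                     # instantaneous error
    w += mu * e * u                      # LMS weight update
print(w)                                 # approaches h_true
```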

  19. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  20. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated byPurcell and Pennypacker, is a very powerful method tosimulate the Elastic Light Scattering from arbitraryparticles. This method, which is a particle simulationmodel for Computational Electromagnetics, has one majordrawback: if the size of the

  1. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

  2. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on the fractions introduced by Fibonacci. The ability to reproduce Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
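
    The paper's translations are in C++; the following Python sketch of ours shows the same "cross" multiplication idea, collecting, for each result digit, all cross products of input digits whose positions sum to that digit's position:

```python
# Fibonacci-style "cross" (lattice) multiplication on decimal digits.
def cross_multiply(a: int, b: int) -> int:
    da = [int(c) for c in str(a)][::-1]   # least-significant digit first
    db = [int(c) for c in str(b)][::-1]
    out, carry = [], 0
    for k in range(len(da) + len(db) - 1):
        # result digit k gathers all cross products a_i * b_j with i + j = k
        s = carry + sum(da[i] * db[k - i]
                        for i in range(len(da)) if 0 <= k - i < len(db))
        out.append(s % 10)
        carry = s // 10
    while carry:                          # flush the remaining carry digits
        out.append(carry % 10)
        carry //= 10
    return int("".join(map(str, out[::-1])))

assert cross_multiply(1234, 5678) == 1234 * 5678
```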

  3. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  4. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new subdiscipline of applied computer science that makes use of methods and techniques of information processing in environmental protection. Thanks to the interdisciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data, and knowledge-based systems in the environmental sector. (orig.)

  5. Research data collection methods: from paper to tablet computers.

    Science.gov (United States)

    Wilcox, Adam B; Gallagher, Kathleen D; Boden-Albala, Bernadette; Bakken, Suzanne R

    2012-07-01

    Primary data collection is a critical activity in clinical research. Even with significant advances in technical capabilities, clear benefits of use, and even user preferences for using electronic systems for collecting primary data, paper-based data collection is still common in clinical research settings. However, with recent developments in both clinical research and tablet computer technology, the comparative advantages and disadvantages of data collection methods should be determined. Our objective is to describe case studies using multiple methods of data collection, including next-generation tablets, and to consider their various advantages and disadvantages. We reviewed 5 modern case studies using primary data collection, with methods ranging from paper to next-generation tablet computers. We performed semistructured telephone interviews with each project, considering factors relevant to data collection. We address specific issues with workflow, implementation and security for these different methods, and identify differences in implementation that led to different technology considerations for each case study. There remain multiple methods for primary data collection, each with its own strengths and weaknesses. Two recent methods are electronic health record templates and next-generation tablet computers. Electronic health record templates can link data directly to medical records, but are notably difficult to use. Current tablet computers are substantially different from previous technologies with regard to user familiarity and software cost. The use of cloud-based storage for tablet computers, however, creates a specific challenge for clinical research that must be considered but can be overcome.

  6. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the super-large scale, discreteness and unstructured or semi-structured nature of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, and can effectively solve the problem that traditional data mining methods cannot adapt to massive data mining. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. A parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.

  7. Strategic alliance in the marketing channel: the ALE Combustíveis S.A. case (Aliança estratégica no canal de marketing: o caso ALE Combustíveis S.A.)

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Garcia Cotta

    2010-01-01

    This paper analyses the strategy adopted by ALE Combustíveis, a Brazilian company, in an operation designed to sell automotive lubricants at its network of gas stations. The study reviews the alliances made with Elf and later with AC Delco, revealing ALE's motivations, partner selection, design of the relationship model, alliance management and assessment of the adopted model, confronting practical experience with prescriptions in the published literature. The adopted model, named broker, is characterized by preservation of the brands' autonomy, with ALE contributing its structure and sales force, and its partner, in turn, the replenishment logistics and order processing. The case method was adopted as the research strategy, and the report shows that the choice of model was sound, but that strategic failures in operational capacity and product positioning, in the alliance with Elf, and in the balance of power within the channel, in the alliance with AC Delco, led both attempts to fail.

  8. Methods of Computer Algebra and the Many Bodies Algebra

    Science.gov (United States)

    Grebenikov, E. A.; Kozak-Skoworodkina, D.; Yakubiak, M.

    2001-07-01

    The monograph concerns qualitative methods in restricted n>3 body problems, treated by methods of computer algebra. The book consists of 4 chapters. The first two chapters contain the theory of homographic solutions in the many-body problem. The other two chapters concern the Lyapunov stability of new solutions of differential equations based on KAM theory. The computer implementation of Birkhoff's normalization of the Hamiltonians for the restricted 4-, 5-, 6-, and 7-body problems is presented in detail. The book is designed for scientific researchers, doctoral students, and students of physical-mathematical departments. It could also be used in university courses on the qualitative theory of differential equations.

  9. Computational method for discovery of estrogen responsive genes

    DEFF Research Database (Denmark)

    Tang, Suisheng; Tan, Sin Lam; Ramadoss, Suresh Kumar

    2004-01-01

    of human genes are functionally well characterized. It is still unclear how many and which human genes respond to estrogen treatment. We propose a simple, economic, yet effective computational method to predict a subclass of estrogen responsive genes. Our method relies on the similarity of ERE frames...

  10. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
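
    A baseline of the kind such work builds on is plain Euler-Maruyama sampling; the following Python sketch (ours, with illustrative parameters) estimates a European call price under geometric Brownian motion:

```python
# Euler-Maruyama Monte Carlo for a European call under GBM.
import numpy as np

rng = np.random.default_rng(4)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
n_steps, n_paths = 100, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    S += r * S * dt + sigma * S * dW          # Euler-Maruyama step

price = np.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))
print(price)   # near the Black-Scholes value (~10.45), up to bias and MC error
```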

  11. Geometric optical transfer function and tis computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    Geometric Optical Transfer Function formula is derived after expound some content to be easily ignored, and the computation method is given with Bessel function of order zero and numerical integration and Spline interpolation. The method is of advantage to ensure accuracy and to save calculation

  12. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
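
    The flavor of such Monte Carlo estimators is easy to convey; the following Python sketch (ours, not the paper's exact estimator; the toy graph, damping factor and run counts are illustrative) runs short random walks from every page and reads the PageRank off the distribution of walk end points:

```python
# Monte Carlo PageRank: short random walks, each continuing with
# probability c, with the end-point frequencies as the estimate.
import random
from collections import Counter

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # toy hyperlink graph
nodes, c, runs = list(links), 0.85, 10_000

ends = Counter()
for start in nodes:
    for _ in range(runs):
        node = start
        while random.random() < c:
            out = links[node]
            if not out:                        # dangling page: walk stops
                break
            node = random.choice(out)
        ends[node] += 1

total = sum(ends.values())
print({n: ends[n] / total for n in nodes})    # estimated PageRank vector
```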

  13. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  14. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  15. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for computing the averaging kernels C numerically is presented. These kernels can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  16. Analysing Interlanguage Stages ALEs Pass through in the Acquisition of the Simple Past Tense

    Science.gov (United States)

    Mourssi, Anwar

    2012-01-01

    Building on previous studies of cross-linguistic influence (CLI) on SLA, and principled criteria for confirming its existence in L2 data, an empirical study was run on 74 Arab learners of English (ALEs). A detailed analysis was made of interlanguage stages of the simple past tense forms in 222 written texts produced by ALEs in the classroom…

  17. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency and robustness. (author)
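
    The least-squares reconstruction ingredient can be illustrated compactly (our Python sketch with arbitrary cell data, not the authors' RDG code): the gradient at a cell is the least-squares fit to the value differences toward its neighbors:

```python
# Least-squares gradient reconstruction at a cell from neighbor differences.
import numpy as np

xc = np.array([0.0, 0.0])                        # cell center
neighbors = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.0], [0.0, -1.1]])
u = lambda p: 2.0 * p[..., 0] - 3.0 * p[..., 1] + 1.0   # linear test field

A = neighbors - xc                               # displacement vectors
b = u(neighbors) - u(xc)                         # value differences
grad, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares gradient fit
print(grad)                                      # recovers [2, -3] exactly
```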

  18. Implementation errors in the GingerALE Software: Description and recommendations.

    Science.gov (United States)

    Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T

    2017-01-01

    Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software.

  19. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  20. Methods for simulation-based analysis of fluid-structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
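
    The POD step of the ROM construction is conveniently expressed as an SVD of a snapshot matrix; the following Python sketch (ours; the synthetic snapshots and energy threshold are illustrative) extracts the reduced basis onto which the governing equations would then be Galerkin-projected:

```python
# POD via SVD: snapshots as columns, dominant left singular vectors as basis.
import numpy as np

x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.sin(np.pi * x) * np.cos(0.1 * t)
                             + 0.3 * np.sin(2 * np.pi * x) * np.sin(0.1 * t)
                             for t in range(50)])       # 200 dofs x 50 snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1            # modes for 99.99% energy
basis = U[:, :r]                                        # POD basis (here r = 2)
coeffs = basis.T @ snapshots                            # reduced coordinates
print(r, np.linalg.norm(snapshots - basis @ coeffs))    # near-exact with r modes
```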

  1. The spectral-element method, Beowulf computing, and global seismology.

    Science.gov (United States)

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  2. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    Science.gov (United States)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  3. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  4. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  5. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and presents numerical results for simple quantum mechanical systems.

  6. Determinant Computation on the GPU using the Condensation Method

    International Nuclear Information System (INIS)

    Haque, Sardar Anisul; Maza, Marc Moreno

    2012-01-01

    We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
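
    The record above describes the Salem-Kouachi GPU variant; as a rough serial illustration of the underlying condensation idea, the classical Chio-style reduction below shrinks an n x n matrix by one order per step using 2 x 2 minors against a pivot. This is a minimal CPU sketch of the general technique, not the authors' GPU algorithm.

```python
import numpy as np

def det_by_condensation(a):
    """Determinant via Chio-style condensation: repeatedly shrink an
    n x n matrix to (n-1) x (n-1) using 2 x 2 minors against the pivot
    a[0, 0], dividing out the accumulated pivot powers at the end."""
    a = np.array(a, dtype=float)
    scale = 1.0
    while a.shape[0] > 1:
        n = a.shape[0]
        if a[0, 0] == 0.0:                    # pivot must be nonzero:
            rows = np.nonzero(a[:, 0])[0]     # swap in a usable row
            if len(rows) == 0:
                return 0.0                    # zero first column => det = 0
            a[[0, rows[0]]] = a[[rows[0], 0]]
            scale = -scale                    # a row swap flips the sign
        # b[i, j] = a00 * a[i+1, j+1] - a[i+1, 0] * a[0, j+1]
        b = a[0, 0] * a[1:, 1:] - np.outer(a[1:, 0], a[0, 1:])
        scale *= a[0, 0] ** (n - 2)           # Chio: det(A) = det(B) / a00^(n-2)
        a = b
    return a[0, 0] / scale

m = np.random.rand(6, 6)
print(det_by_condensation(m), np.linalg.det(m))   # should agree
```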

  7. Measuring coherence of computer-assisted likelihood ratio methods.

    Science.gov (United States)

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.
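
    The record does not reproduce its performance characteristics, but a widely used scalar summary of LR discrimination plus calibration in forensic and speaker-recognition work is the log-likelihood-ratio cost, Cllr. A minimal sketch (the Cllr formula is standard; the data here are invented):

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost (Cllr): a standard scalar summary of
    both discrimination and calibration of a set of LR values.
    lr_same: LRs for same-source pairs (should be large).
    lr_diff: LRs for different-source pairs (should be small)."""
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    pen_same = np.mean(np.log2(1.0 + 1.0 / lr_same))  # penalty for small same-source LRs
    pen_diff = np.mean(np.log2(1.0 + lr_diff))        # penalty for large diff-source LRs
    return 0.5 * (pen_same + pen_diff)

# A completely uninformative method (LR = 1 everywhere) gives Cllr = 1.
print(cllr([1.0, 1.0], [1.0, 1.0]))        # -> 1.0
print(cllr([50.0, 200.0], [0.02, 0.1]))    # well calibrated and discriminating -> << 1
```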

  8. Computer-implemented gaze interaction method and apparatus

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of communicating via interaction with a user-interface based on a person's gaze and gestures, comprising: computing an estimate of the person's gaze comprising computing a point-of-regard on a display through which the person observes a scene in front of him; by means of a scene camera, capturing a first image of a scene in front of the person's head (and at least partially visible on the display) and computing the location of an object coinciding with the person's gaze; by means of the scene camera, capturing at least one further image of the scene in front of the person's head, and monitoring whether the gaze dwells on the recognised object; and while gaze dwells on the recognised object: firstly, displaying a user interface element, with a spatial expanse, on the display face in a region adjacent to the point-of-regard; and secondly, during movement of the display...

  9. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    Full Text Available A key direction in the reengineering of modern computer networks is their transfer to the new energy-saving technology IEEE 802.3az. To make a reasoned decision about the transition to this technology, network engineers need a technique for assessing the economic feasibility of a network upgrade. Aim: The aim of this research is to develop a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models to calculate the power consumption of a computer network port operating in standard IEEE 802.3 mode and in the energy-efficient mode of IEEE 802.3az. Frame transmission time in the communication channel is calculated with a queuing model. To determine the values of the network operation parameters, a multi-agent network monitoring method is proposed. Results: The method allows calculating the economic impact of transferring a computer network to the energy-saving technology IEEE 802.3az. To determine the network performance parameters, the use of SNMP network monitoring systems based on RMON MIB agents is proposed.
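
    As a toy illustration only (the power draws, utilisation, and tariff below are invented placeholders, not the paper's analytical models), the kind of per-port saving such a method estimates can be sketched as:

```python
# Illustrative only: a toy average-power model for an Ethernet port with
# and without IEEE 802.3az Energy-Efficient Ethernet (EEE). All numbers
# are made-up placeholders, not values from the paper.
P_ACTIVE_W = 0.7    # assumed port power while transmitting, watts
P_LPI_W    = 0.1    # assumed power in EEE low-power idle, watts
UTIL       = 0.15   # assumed fraction of time the port carries frames
TARIFF     = 0.12   # assumed electricity price, $ per kWh

p_legacy = P_ACTIVE_W                                # 802.3: full power at all times
p_eee    = UTIL * P_ACTIVE_W + (1 - UTIL) * P_LPI_W  # 802.3az: idle periods sleep

saved_kwh_per_year = (p_legacy - p_eee) * 24 * 365 / 1000
print(f"per-port saving: {saved_kwh_per_year:.1f} kWh/yr, "
      f"${saved_kwh_per_year * TARIFF:.2f}/yr")
```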

  10. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  11. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview of scientific computational methods and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  12. Erratum to: A computational method for the solution of one ...

    Indian Academy of Sciences (India)

    Erratum to: A computational method for the solution of one-dimensional nonlinear thermoelasticity. M MIRZAZADEH1,∗, M ESLAMI2 and ANJAN BISWAS3,4. 1Department of Engineering Sciences, Faculty of Technology and Engineering,. University of Guilan, East of Guilan, Rudsar, Iran. 2Department of Mathematics ...

  13. Convergence acceleration of the Proteus computer code with multigrid methods

    Science.gov (United States)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
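
    To make the multigrid idea concrete, here is a minimal two-grid cycle for the 1D Poisson problem, with weighted-Jacobi smoothing, restriction of the residual, a direct coarse solve, and interpolation of the correction. It is a generic sketch, unrelated to the Proteus implementation:

```python
import numpy as np

def jacobi(u, f, h, iters, w=2/3):
    # Weighted Jacobi smoothing for -u'' = f with zero Dirichlet boundaries.
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h, nu=3):
    """One two-grid cycle: pre-smooth, restrict the residual to the coarse
    grid, solve there, prolongate the correction back, post-smooth."""
    u = jacobi(u, f, h, nu)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)   # residual
    rc = r[::2].copy()                       # restriction by injection
    n_c = rc.size - 1                        # coarse grid has spacing 2h
    A = (np.diag(2*np.ones(n_c-1)) - np.diag(np.ones(n_c-2), 1)
         - np.diag(np.ones(n_c-2), -1)) / (2*h)**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # direct coarse solve of -e'' = r
    e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # prolongation
    return jacobi(u + e, f, h, nu)

n = 64
x = np.linspace(0, 1, n + 1); h = 1.0 / n
f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # near the O(h^2) limit
```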

  14. Recent development in methods for electron optical computations

    Czech Academy of Sciences Publication Activity Database

    Lencová, Bohumila

    2001-01-01

    Vol. 93, No. 6 (2001), pp. 434-435. ISSN 0248-4900. Institutional research plan: CEZ:AV0Z2065902. Keywords: electron optical computations * finite element method. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 1.829, year: 2001

  15. A hyperpower iterative method for computing the generalized Drazin ...

    Indian Academy of Sciences (India)

    Shwetabh Srivastava

    ... method for computing the generalized Drazin inverse of Banach algebra element. SHWETABH SRIVASTAVA1,*, DHARMENDRA K GUPTA2, PREDRAG STANIMIROVIC´3,. SUKHJIT SINGH4 and FALGUNI ROY2. 1 Department of Mathematics, School of Arts & Sciences, Amrita Vishwa Vidyapeetham, Amrita University,.

  16. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  17. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  18. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  19. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp-filtered forward projection of the Shepp and Logan head phantom is derived.
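
    A minimal sketch of one end of the trade-off discussed above: a rotation-based parallel-beam forward projector whose bilinear interpolation (order=1) is exactly the kind of numerical-integration approximation being compared. The block phantom is a stand-in for the Shepp-Logan phantom:

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Parallel-beam forward projection (a sinogram): rotate the image so
    each view's rays align with the columns, then integrate down them.
    The rotation's bilinear interpolation is the integration approximation."""
    sino = np.empty((len(angles_deg), image.shape[1]))
    for i, ang in enumerate(angles_deg):
        view = rotate(image, ang, reshape=False, order=1)  # bilinear
        sino[i] = view.sum(axis=0)                         # line integrals
    return sino

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                  # a simple block "phantom"
sinogram = forward_project(phantom, np.arange(0.0, 180.0, 1.0))
print(sinogram.shape)                        # (180, 128)
```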

  20. A Parallel Iterative Method for Computing Molecular Absorption Spectra.

    Science.gov (United States)

    Koval, Peter; Foerster, Dietrich; Coulaud, Olivier

    2010-09-14

    We describe a fast parallel iterative method for computing molecular absorption spectra within TDDFT linear response and using the LCAO method. We use a local basis of "dominant products" to parametrize the space of orbital products that occur in the LCAO approach. In this basis, the dynamic polarizability is computed iteratively within an appropriate Krylov subspace. The iterative procedure uses a matrix-free GMRES method to determine the (interacting) density response. The resulting code is about 1 order of magnitude faster than our previous full-matrix method. This acceleration makes the speed of our TDDFT code comparable with codes based on Casida's equation. The implementation of our method uses hybrid MPI and OpenMP parallelization in which load balancing and memory access are optimized. To validate our approach and to establish benchmarks, we compute spectra of large molecules on various types of parallel machines. The methods developed here are fairly general, and we believe they will find useful applications in molecular physics/chemistry, even for problems that are beyond TDDFT, such as organic semiconductors, particularly in photovoltaics.
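
    The core numerical pattern, a matrix-free Krylov solve in which only the action of the operator is ever coded, can be sketched with SciPy's GMRES; the diagonal-plus-rank-one operator below is a stand-in for the actual TDDFT response kernel:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 2000
diag = np.linspace(1.0, 10.0, n)

def apply_response(v):
    # Stand-in for the expensive operator: in the paper this would be the
    # action of the interacting response kernel on a trial density; here a
    # diagonal plus a low-rank coupling keeps the example tiny.
    return diag * v + 0.01 * v.sum() * np.ones(n)

A = LinearOperator((n, n), matvec=apply_response, dtype=float)
b = np.random.rand(n)                  # stand-in for the external perturbation
x, info = gmres(A, b, atol=1e-10)      # Krylov solve; A is never formed
print(info, np.linalg.norm(apply_response(x) - b))
```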

  1. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  2. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
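
    A toy version of the harmonic-grouping step (a plain FFT harmonic-sum salience, not the paper's RTFI) illustrates both the peak-picking and why a pruning stage is needed: the subharmonic at 110 Hz scores almost as high as the true notes, the classic octave ambiguity that the paper's second stage removes:

```python
import numpy as np

def pitch_salience(x, sr, f0_grid, n_harm=6):
    """Toy harmonic-grouping salience: for each candidate f0, sum the FFT
    magnitude at its first n_harm harmonics (a crude stand-in for the
    paper's RTFI pitch energy spectrum)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    sal = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for h in range(1, n_harm + 1):
            k = np.argmin(np.abs(freqs - h * f0))   # nearest FFT bin
            sal[i] += spec[k] / h                   # weight harmonics down
    return sal

sr = 16000
t = np.arange(0, 0.25, 1.0 / sr)
x = np.sin(2*np.pi*220*t) + 0.7*np.sin(2*np.pi*330*t)   # two notes: A3 + E4
grid = np.arange(80.0, 500.0, 1.0)
sal = pitch_salience(x, sr, grid)
order = grid[np.argsort(sal)[::-1]]
# Top candidates cluster near 220 and 330 Hz, with the 110 Hz subharmonic
# also scoring high: the octave error a pruning stage must remove.
print(order[:8])
```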

  3. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
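
    One of the statistical applications mentioned, selection of variables in regression, makes a compact genetic-algorithm example: bit-string chromosomes encode feature subsets, fitness is an AIC-style score, and truncation selection, one-point crossover, and bit-flip mutation evolve the population. All data and GA settings below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: only features 0, 3, 7 actually matter.
n, p = 200, 12
X = rng.normal(size=(n, p))
y = 3*X[:, 0] - 2*X[:, 3] + X[:, 7] + rng.normal(scale=0.5, size=n)

def fitness(mask):
    """AIC-style score for a 0/1 feature mask (lower is better)."""
    if mask.sum() == 0:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * mask.sum()

pop = rng.integers(0, 2, size=(40, p))               # random initial population
for gen in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:20]]           # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, p)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(p) < 0.05                  # bit-flip mutation
        child[flip] ^= 1
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmin([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))    # expect [0, 3, 7]
```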

  4. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
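
    The heart of the FDTD method fits in a few lines: staggered E and H fields updated in a leapfrog loop. A minimal 1D vacuum sketch in normalized units (a generic textbook scheme, not taken from the book):

```python
import numpy as np

# Minimal 1D FDTD (Yee) update in vacuum with normalized units: E and H
# live on staggered grids and leapfrog in time; the "magic" Courant
# number of 1 makes the 1D scheme exact. A soft source injects a pulse.
nx, nt = 400, 800
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for t in range(nt):
    hy += np.diff(ez)                          # H update: dH/dt ~ curl E
    ez[1:-1] += np.diff(hy)                    # E update: dE/dt ~ curl H
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
print("field energy proxy:", (ez**2).sum() + (hy**2).sum())
```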

  5. A granular computing method for nonlinear convection-diffusion equation

    Directory of Open Access Journals (Sweden)

    Tian Ya Lan

    2016-01-01

    Full Text Available This paper introduces a method for solving the nonlinear convection-diffusion equation (NCDE), based on the combination of granular computing (GrC) and the characteristics finite element method (CFEM). The key idea of the proposed method (denoted as GrC-CFEM) is to reconstruct the solution from the coarse-grained layer to the fine-grained layer. It first obtains the nonlinear solution on the coarse-grained layer, and then a Taylor expansion is applied to linearize the NCDE on the fine-grained layer. Switching to the fine-grained layer, the linear solution is directly derived from the nonlinear solution. The full nonlinear problem is solved only on the coarse-grained layer. Numerical experiments show that GrC-CFEM can accelerate the convergence and improve the computational efficiency without sacrificing accuracy.

  6. A Review of Computational Intelligence Methods for Eukaryotic Promoter Prediction.

    Science.gov (United States)

    Singh, Shailendra; Kaur, Sukhbir; Goel, Neelam

    2015-01-01

    In past decades, the prediction of genes in DNA sequences has attracted the attention of many researchers, but due to the complex structure of DNA it remains extremely intricate to locate gene positions correctly. A large number of regulatory regions present in DNA help in the transcription of a gene. The promoter is one such region, and finding its location is a challenging problem. Various computational methods for promoter prediction have been developed over the past few years. This paper reviews these promoter prediction methods. Several difficulties and pitfalls encountered by these methods are also detailed, along with future research directions.

  7. Sensitivity of solutions computed through the Asymptotic Numerical Method

    Science.gov (United States)

    Charpentier, Isabelle

    2008-10-01

    The Asymptotic Numerical Method (ANM) allows one to compute solution branches of sufficiently smooth non-linear PDE problems using truncated Taylor expansions. The Diamant approach of the ANM has been proposed to definitively hide the differentiation aspects from the user. In this Note, this significant improvement in terms of genericity is exploited to compute the sensitivity of ANM solutions with respect to modelling parameters. The differentiation in the parameters is discussed at both the equation and code level to highlight the Automatic Differentiation (AD) purposes. A numerical example proves the interest of such techniques for a generic and efficient implementation of sensitivity computations. To cite this article: I. Charpentier, C. R. Mecanique 336 (2008).

  8. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms.
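
    The linear-programming core that many of these strain-design methods build on, flux balance analysis with a gene-deletion bound, can be sketched on an invented three-metabolite toy network (the stoichiometry below is illustrative only):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA): maximize the "growth" flux (reaction
# index 2) subject to steady state S v = 0 and capacity bounds. The
# 3-metabolite, 4-reaction network is invented purely to show the LP
# structure; "gene deletion" = forcing one flux bound to zero.
S = np.array([[ 1, -1,  0,  0],   # metabolite A: uptake - conversion
              [ 0,  1, -1,  0],   # metabolite B: conversion - growth
              [ 0,  1,  0, -1]])  # byproduct C: conversion - secretion

bounds = [(0, 10), (0, 10), (0, 10), (0, 10)]
c = np.zeros(4); c[2] = -1.0      # linprog minimizes, so negate growth flux

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print("wild-type growth:", -res.fun)

bounds[3] = (0, 0)                # "delete the gene" for the secretion reaction
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print("knockout growth:", -res_ko.fun)   # secretion is essential here -> 0
```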

  9. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. The acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes the formation of situations and reactions and their dependence on the different parameters affecting instruction, such as the reinforcement value and the time between the stimulus, the action, and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer-instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch, and light sensors.

  10. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  11. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    Energy Technology Data Exchange (ETDEWEB)

    Noble, Charles R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Anderson, Andrew T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barton, Nathan R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bramwell, Jamie A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Capps, Arlie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chang, Michael H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chou, Jin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, David M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Diana, Emily R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, Timothy A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faux, Douglas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fisher, Aaron C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heinz, Ines [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kanarska, Yuliya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Khairallah, Saad A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Liu, Benjamin T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Margraf, Jon D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nichols, Albert L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Puso, Michael A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reus, James F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, Peter B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shestakov, Alek I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Taller, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tsuji, Paul H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Christopher A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Jeremy L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-23

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.
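
    ALE3D itself is a large multi-physics code, but the basic ALE cycle it generalizes, a Lagrangian step in which the mesh follows the material, followed by a conservative remap back toward a target mesh, can be sketched in 1D for a passively advected density (a schematic sketch, not ALE3D's algorithm):

```python
import numpy as np

# Bare-bones 1D ALE cycle for a passively advected density field:
# (1) Lagrangian phase: mesh edges move with the flow, so cell masses
#     are frozen; (2) remap phase: mass is conservatively transferred
#     back to the original (Eulerian) mesh. Real ALE codes relax the
# mesh rather than fully rezoning and carry momentum/energy as well.
n = 200
edges = np.linspace(0.0, 1.0, n + 1)
rho = np.where((edges[:-1] > 0.2) & (edges[:-1] < 0.4), 1.0, 0.1)
u = 0.5 * np.ones(n + 1)            # prescribed node velocities
dt = 0.002

def remap(old_edges, mass, new_edges):
    """Conservative first-order remap via the cumulative mass function."""
    cum = np.concatenate([[0.0], np.cumsum(mass)])
    new_cum = np.interp(new_edges, old_edges, cum)
    return np.diff(new_cum)

for _ in range(300):
    mass = rho * np.diff(edges)      # Lagrangian: mass per cell is frozen
    moved = edges + dt * u           # the mesh follows the flow
    mass = remap(moved, mass, edges) # rezone/remap back to the original mesh
    rho = mass / np.diff(edges)
print("total mass:", rho @ np.diff(edges))   # conserved up to boundary outflow
```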

  12. Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glenn, Lewis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schebler, Gregory [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McMichael, Larry [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-09-12

    This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).

  13. Practical methods to improve the development of computational software

    International Nuclear Information System (INIS)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960's, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  14. Viscous flow computations with the lattice-Boltzmann equation method

    Science.gov (United States)

    Yu, Dazhi

    2002-09-01

    The lattice Boltzmann equation (LBE) method is a kinetics-based approach for fluid flow computations, and it is amenable to parallel computing. Compared to the well-established Navier-Stokes (NS) approaches, critical issues remain with the LBE method, notably flexible spatial resolution, boundary treatments, and dispersion and relaxation time modes. Those issues are addressed in this dissertation with improved practice presented. At the formulation level, both the single-relaxation-time (SRT) and multiple-relaxation-time (MRT) models are analyzed. The SRT model involves no artificial parameters, with a constant relaxation time regulating the physical value of fluid viscosity. The MRT model allows different relaxation time scales for different variables. Computational assessment shows that the MRT model has advantages over the SRT model in maintaining stability, reducing the oscillation, and improving the convergence rate in the computation. A multi-block method is developed for both the SRT and MRT models to facilitate flexible spatial resolutions according to the flow structures. The formulae for information exchange at the interface between coarse and fine grids are derived to ensure mass and momentum conservation while maintaining the second-order accuracy. A customized time matching between coarse and fine grids is also presented to ensure smooth information exchange. Results show that the multi-block method can greatly increase the computational efficiency of the LBE method without losing accuracy. Two methods of force evaluation in LBE are examined: one based on stress integration on the solid boundary and the other on momentum exchange between fluid and solid. The momentum exchange method is found to be simpler to implement, while the integration of stress requires evaluation of the detailed surface geometry and extrapolation of stress-related variables to the same surface. The momentum exchange method performs better overall. Improved treatments for
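
    The SRT (BGK) scheme discussed above is compact enough to sketch in full: a D2Q9 collide-and-stream loop on a periodic box, here damping an initial shear wave with viscosity nu = (tau - 1/2)/3 in lattice units. This is a generic textbook sketch, not the dissertation's multi-block code:

```python
import numpy as np

nx = ny = 64
tau = 0.8    # relaxation time; lattice viscosity nu = (tau - 0.5)/3
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    # Standard D2Q9 second-order equilibrium distribution.
    eu = e[:, 0, None, None]*ux + e[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho = np.ones((nx, ny))
ux = 0.05*np.sin(2*np.pi*y/ny)     # initial shear wave
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(500):
    rho = f.sum(axis=0)                                  # macroscopic moments
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for k in range(9):                                   # periodic streaming
        f[k] = np.roll(np.roll(f[k], e[k, 0], axis=0), e[k, 1], axis=1)
print("peak velocity after decay:", np.abs(ux).max())    # viscous damping
```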

  15. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with image sequences of real-world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can achieve relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model is as simple as a plane, a mixed method can be introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, though this imports erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and registration is thus accomplished even when the markers are lost in the scene.
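
    The RANSAC-homography step maps directly onto OpenCV's API; the sketch below synthesizes correspondences, corrupts some of them as an occlusion-prone matcher might, and recovers the homography while flagging the bad matches as outliers (the true homography and noise levels are invented):

```python
import numpy as np
import cv2

H_true = np.array([[1.0,  0.02,  5.0],
                   [0.01, 1.0,  -3.0],
                   [1e-4, 0.0,   1.0]])
src = np.random.rand(60, 1, 2).astype(np.float32) * 300
dst = cv2.perspectiveTransform(src, H_true)              # ideal matches
dst[:10] += np.random.rand(10, 1, 2).astype(np.float32) * 50  # 10 bad matches

# RANSAC rejects the corrupted correspondences as outliers.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
print("inliers kept:", int(inlier_mask.sum()), "of", len(src))
print(np.round(H, 3))   # close to H_true; camera pose follows from H
```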

  16. Applications of meshless methods for damage computations with finite strains

    Science.gov (United States)

    Pan, Xiaofei; Yuan, Huang

    2009-06-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied with the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. Computational results reveal that damage which initiates in the interior of specimens extends to the exterior and causes fracture of the specimens; damage evolution is fast relative to the whole tensile loading process. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

  17. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    Science.gov (United States)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane-wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  18. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  19. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  20. Improved Method of Blind Speech Separation with Low Computational Complexity

    Directory of Open Access Journals (Sweden)

    Kazunobu Kondo

    2011-01-01

    a frame-wise spectral soft mask method based on an interchannel power ratio of tentative separated signals in the frequency domain. The soft mask cancels the transfer function between sources and separated signals. A theoretical analysis of selection criteria and the soft mask is given. Performance and effectiveness are evaluated via source separation simulations and a computational estimate, and experimental results show the significantly improved performance of the proposed method. The segmental signal-to-noise ratio achieves 7 [dB] and 3 [dB], and the cepstral distortion achieves 1 [dB] and 2.5 [dB], in anechoic and reverberant conditions, respectively. Moreover, computational complexity is reduced by more than 80% compared with unmodified FDICA.

  1. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  2. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure can offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
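
    For reference, the baseline that the reordered scheme accelerates is plain power iteration on the damped Google matrix; a minimal sketch on an invented four-page web (this is the standard algorithm, not the authors' reordering):

```python
import numpy as np

def pagerank_power(links, alpha=0.85, tol=1e-10):
    """Baseline power iteration for PageRank (the reference point the
    reordered method accelerates). links[i] = list of pages i points to."""
    n = len(links)
    x = np.full(n, 1.0 / n)
    while True:
        x_new = np.full(n, (1 - alpha) / n)           # teleportation term
        for i, outs in enumerate(links):
            if outs:
                x_new[np.array(outs)] += alpha * x[i] / len(outs)
            else:
                x_new += alpha * x[i] / n             # dangling node: spread evenly
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

# Toy four-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
print(pagerank_power([[1, 2], [2], [0], [2]]))
```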

  3. Computer methods in radiation protection as viewed by a user

    CERN Document Server

    Stevenson, G R

    1980-01-01

    Five elements in computer methods as applied to radiation protection are identified and discussed: the user, from the person who never touches a keyboard to the person who has never made a fluence measurement; the problem, analytical, iterative or Monte Carlo; the objective, for multiple engineering runs, for single in-depth studies or for program development; the interface, or how to get the data in and the answer out; the solution, or is this really the right answer? (0 refs).

  4. A fast, physically based method for mixing computations

    Science.gov (United States)

    Meunier, Patrice; Villermaux, Emmanuel

    2008-11-01

    We introduce a new numerical method for the study of diffusing scalar filaments in a 2D advection field. The position of the advected filament is computed kinematically, and the associated convection-diffusion problem is solved using the computed local stretching rate, assuming that the diffusing filament thickness is smaller than its local radius of curvature. This assumption reduces the numerical problem to the computation of a single variable along the filament, thus making the method extremely fast and applicable to any Peclet number. This method is then used for the mixing of a scalar in the chaotic regime of a Sine Flow, for which we relate the global quantities (spectra, concentration PDF) to the distributed stretching of the convoluted filament. The numerical results indicate that the PDF of the filament elongation is log-normal, a signature of random multiplicative processes. This property leads to exact analytical predictions for the spectrum of the field and for the PDF of the scalar concentration, in good agreement with the numerical results. These are thought to be generic of the chaotic mixing of scalars in the Batchelor regime.

  5. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)

  6. Splitting method for computing coupled hydrodynamic and structural response

    International Nuclear Information System (INIS)

    Ash, J.E.

    1977-01-01

    A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data

  7. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  8. A scalable method for computing quadruplet wave-wave interactions

    Science.gov (United States)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging as one should find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptiveness is to adapt the abscissa of the locus integrand in relation to the magnitude of the known terms. The adaptiveness is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptiveness results in a speed-up of one to three orders of magnitude depending on the measure of accuracy. This definition of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results from academic spectra, simple growth curves to more complicated field cases using a 3G-wave model.

  9. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computation effort by using a mixed implicit-explicit time integration method. (orig.)

  10. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  11. Computational Intelligence Methods for Identifying Voltage Sag in Smart Grid

    Directory of Open Access Journals (Sweden)

    Turgay Yalcin

    2017-05-01

    Full Text Available In recent years, pattern recognition of power quality (PQ) disturbances in smart grids has developed into a crucial topic for system equipment and end-users. Analyzing PQ disturbances undoubtedly helps develop and maintain the effectiveness of smart grids. Voltage sags are the most common events that affect power quality, and these faults are also the most costly. This paper presents performance comparisons of different computational intelligence methods for voltage sag identification. A PQube analyzer installed in the Ondokuz Mayis University computer laboratory collects real-time disturbance data for each of the three phases in order to test the proposed algorithms. First, the Hilbert-Huang Transform is used to generate an Instantaneous Amplitude (IA) feature signal, and characteristic features are then extracted from the IA: its mean, standard deviation, skewness, and kurtosis. Support Vector Machines (SVMs) and C4.5 decision tree methods are applied to classify the disturbance. Second, Fisher's Discriminant Ratio is used to select statistical features (mean, standard deviation, skewness, and kurtosis) of the normal and voltage sag signals, and a K-means clustering method is applied to classify the disturbance. Consequently, SVMs, C4.5 decision trees, and K-means clustering were all performed, and their results were compared in terms of error rate and CPU time.
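
    The feature-extraction step can be sketched directly with SciPy; for brevity the sketch applies the Hilbert transform to the raw signal, skipping the empirical mode decomposition stage of a full Hilbert-Huang analysis, and the sag waveform is synthetic:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew, kurtosis

def ia_features(x):
    """Instantaneous-amplitude features: the analytic signal's envelope,
    summarized by mean, standard deviation, skewness, and kurtosis."""
    ia = np.abs(hilbert(x))                    # instantaneous amplitude
    return np.array([ia.mean(), ia.std(), skew(ia), kurtosis(ia)])

fs, f0 = 3200, 50
t = np.arange(0, 0.2, 1/fs)                    # ten 50 Hz cycles
normal = np.sin(2*np.pi*f0*t)
sag = normal.copy()
sag[(t > 0.06) & (t < 0.14)] *= 0.4            # 60% voltage sag for 4 cycles

print("normal:", np.round(ia_features(normal), 3))
print("sag:   ", np.round(ia_features(sag), 3))  # larger std, shifted skewness
```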

  12. Computational methods of the Advanced Fluid Dynamics Model

    International Nuclear Information System (INIS)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development

  13. Parallel fast multipole boundary element method applied to computational homogenization

    Science.gov (United States)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for 3D elasticity problems are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved using the GMRES iterative solver. The boundary of the body is discretized using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
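    The static work-partitioning idea described above (disjoint, approximately equal node sets per thread) is sketched below as a Python stand-in for the OpenMP scheme; 'moment_transform' is a hypothetical placeholder for the per-node kernel.

    ```python
    # Sketch: split tree nodes into near-equal chunks, one per worker thread.
    from concurrent.futures import ThreadPoolExecutor

    def partition(nodes, n_chunks):
        """Disjoint, approximately equal-size chunks of the node list."""
        k, r = divmod(len(nodes), n_chunks)
        chunks, start = [], 0
        for i in range(n_chunks):
            size = k + (1 if i < r else 0)
            chunks.append(nodes[start:start + size])
            start += size
        return chunks

    def moment_transform(node):   # placeholder for the real moment work
        return sum(node)          # dummy computation

    nodes = [[i, i + 1] for i in range(10)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda chunk: [moment_transform(n) for n in chunk],
                                partition(nodes, 4)))
    ```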

  14. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    Science.gov (United States)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on: Theory, Modelling and Computational methods for Semiconductors. The conference was held at the St Williams College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in Semiconductor science and technology, where there is a substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the field of theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr

  15. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast, a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts at the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology, and Fidel Specialist Hospital, on a sample of 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with those of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing computed tomography images.
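    The watershed transform named above is available in standard imaging libraries. The following is a minimal marker-based watershed sketch, not the study's pipeline; 'ct_slice' is a placeholder array standing in for a real CT slice.

    ```python
    # Sketch of marker-based watershed segmentation on a 2D grayscale slice.
    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu, sobel
    from skimage.segmentation import watershed

    ct_slice = np.random.rand(256, 256)            # stand-in for a real CT slice

    mask = ct_slice > threshold_otsu(ct_slice)     # rough foreground mask
    distance = ndimage.distance_transform_edt(mask)
    # Markers: strong peaks of the distance map, labeled as seed regions
    markers, _ = ndimage.label(distance > 0.7 * distance.max())
    # Flood the gradient image from the markers
    labels = watershed(sobel(ct_slice), markers, mask=mask)
    print('regions found:', labels.max())
    ```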

  16. The Piecewise Cubic Method (PCM) for computational fluid dynamics

    Science.gov (United States)

    Lee, Dongwook; Faller, Hugues; Reyes, Adam

    2017-07-01

    We present a new high-order finite volume reconstruction method for hyperbolic conservation laws. The method is based on a piecewise cubic polynomial which provides solutions with fifth-order accuracy in space. The spatially reconstructed solutions are evolved in time with fourth-order accuracy by tracing the characteristics of the cubic polynomials. As a result, our temporal update scheme provides a significantly simpler and computationally more efficient approach to achieving fourth-order accuracy in time, relative to the comparable fourth-order Runge-Kutta method. We demonstrate that the solutions of PCM converge at fifth order when solving 1D smooth flows described by hyperbolic conservation laws. We test the new scheme on a range of numerical experiments, including both gas dynamics and magnetohydrodynamics applications in multiple spatial dimensions.
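    The basic building block of any such reconstruction, fitting a cubic whose cell averages match the data, can be illustrated generically. The sketch below shows cubic reconstruction from four cell averages; it is not the PCM scheme itself, and the test function is arbitrary.

    ```python
    # Reconstruct a cubic from four neighbouring cell averages, evaluate at x=0.
    import numpy as np

    def cubic_from_averages(ubar, h=1.0):
        """Cubic p(x) whose averages over cells [-2h,-h],[-h,0],[0,h],[h,2h]
        match the four given cell averages ubar."""
        edges = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) * h
        # Row j, col k: average of x^k over cell j = (R^{k+1}-L^{k+1})/((k+1)h)
        A = np.array([[(edges[j+1]**(k+1) - edges[j]**(k+1)) / ((k + 1) * h)
                       for k in range(4)] for j in range(4)])
        return np.linalg.solve(A, ubar)   # coefficients c0..c3 of p(x)

    # Example: exact averages of u(x) = sin(x) over four cells around x = 0
    h = 0.1
    edges = np.array([-2, -1, 0, 1, 2]) * h
    ubar = (np.cos(edges[:-1]) - np.cos(edges[1:])) / h
    c = cubic_from_averages(ubar, h)
    print('reconstructed u(0):', c[0], 'exact:', np.sin(0.0))
    ```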

  17. A discrete ordinate response matrix method for massively parallel computers

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1991-01-01

    A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S8 and S16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry

  18. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high-order numerical methods for solving Maxwell equations, such as high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. To treat long-range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model speeds up molecular dynamics simulations of transport in biological ion channels.

  19. Modeling methods for merging computational and experimental aerodynamic pressure data

    Science.gov (United States)

    Haderlie, Jacob C.

    This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results; machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT
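    A Gaussian-process surrogate of Cp versus angle of attack, one ingredient of the comparison above, can be sketched with standard tools. The data below are synthetic placeholders, and the kernel choice is an assumption, not the work's actual configuration.

    ```python
    # Hedged sketch: GP surrogate of pressure coefficient vs angle of attack.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    alpha = np.linspace(0.0, 10.0, 15)[:, None]   # angle of attack, degrees
    cp = -0.1 * alpha.ravel() + 0.02 * alpha.ravel()**2 \
         + 0.01 * rng.standard_normal(15)         # toy "wind tunnel" samples

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(alpha, cp)

    alpha_new = np.array([[4.3]])
    mean, std = gp.predict(alpha_new, return_std=True)
    print(f'Cp({alpha_new[0, 0]} deg) = {mean[0]:.3f} +/- {std[0]:.3f}')
    ```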

  20. Radiation Transport Computation in Stochastic Media: Method and Application

    Science.gov (United States)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural and engineering systems. In the radiation transport computation community, there is a constant demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by a stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are constantly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered a promising method for analyzing such TRISO-fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method found by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system, and in some specific scenarios considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios.
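    The core CLS idea, sampling the distance to the next inclusion from an exponential distribution with the matrix's mean chord length rather than resolving every inclusion, can be illustrated with a toy model. The sketch below is geometry-free and all parameters are made up; it is not a reactor-physics code.

    ```python
    # Toy Chord Length Sampling: fly through matrix, then through an inclusion.
    import numpy as np

    rng = np.random.default_rng(1)
    lam_matrix = 2.0      # mean chord length between inclusions (assumed)
    lam_inclusion = 0.1   # mean chord length across an inclusion (assumed)
    sigma_t_incl = 5.0    # total cross-section inside inclusions (assumed)

    def distance_to_collision(n_particles=100_000):
        """Total path travelled before a collision inside an inclusion."""
        dist = np.zeros(n_particles)
        alive = np.ones(n_particles, dtype=bool)
        while alive.any():
            n = alive.sum()
            # Fly through the matrix to the next inclusion (the CLS step)
            dist[alive] += rng.exponential(lam_matrix, n)
            # Path inside the inclusion, capped by its sampled chord length
            s_collide = rng.exponential(1.0 / sigma_t_incl, n)
            chord = rng.exponential(lam_inclusion, n)
            collided = s_collide < chord
            dist[alive] += np.minimum(s_collide, chord)
            alive[np.where(alive)[0][collided]] = False
        return dist

    print('mean distance to collision:', distance_to_collision().mean())
    ```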

  1. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After the deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not yet exist. Further, traditional formulations proposed for clearing reactive power markets use non-linear mixed integer programming formulations that are difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in the pricing scheme, which rewards transformers for tap shifting while participating in this market. The proposed model is a challenging non-linear mixed integer program. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)

  2. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical survey data. This survey gathers information for the intended use of a full-profile tunnel boring machine that would drive the motorway tunnel. From the part of the exploration gallery driven by the TBM method, comprehensive information about the parameters of the driving process is gathered by a computer monitoring system mounted on the machine. The monitoring system is based on the industrial computer PC 104 and records four basic values of the driving process: the electric motor power of the driving machine Voest-Alpine ATB 35HA, the advance rate, the rotation speed of the TBM disintegrating head, and the total thrust. The thrust force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are calculated; these values characterize the rock mass properties and their changes. To define the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article describes the methodology, developed at the Institute of Geotechnics SAS, for processing the monitoring data gathered for the driving machine Voest-Alpine ATB 35HA. It describes the input forms (protocols) of the method, created in EXCEL, and shows selected samples of the graphical processing of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.

  3. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
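    The abstract is terse, but the masked-signature idea can be loosely illustrated. The sketch below is an assumption-laden toy (sign-pattern signatures, exact-match candidates), not the patented method: masks are taken from rows of a Hadamard matrix and two vectors are flagged as a candidate pair when their signatures match.

    ```python
    # Loose illustration of Hadamard-mask signatures for candidate generation.
    import numpy as np
    from scipy.linalg import hadamard

    dim = 8
    masks = hadamard(dim)              # rows form a Hadamard code

    def signature(vec, masks):
        """One bit per mask: sign of the masked projection (an assumption)."""
        return tuple((masks @ vec) > 0)

    rng = np.random.default_rng(2)
    query = rng.standard_normal(dim)
    data = query + 0.05 * rng.standard_normal(dim)   # near-duplicate vector

    print('candidate pair:', signature(query, masks) == signature(data, masks))
    ```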

  4. Numerical methods and computers used in elastohydrodynamic lubrication

    Science.gov (United States)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  5. Description of a method for computing fluid-structure interaction

    International Nuclear Information System (INIS)

    Gantenbein, F.

    1982-02-01

    A general formulation for computing structural vibrations in a dense fluid is described. It is based on modelling the fluid with fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between structure and fluid is introduced by surface elements. This method is easy to introduce into a general finite element code. Validation was obtained by analytical calculations and tests. The method is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented [fr]

  6. Compressive sampling in computed tomography: Method and application

    International Nuclear Information System (INIS)

    Hu, Zhanli; Liang, Dong; Xia, Dan; Zheng, Hairong

    2014-01-01

    Since Donoho and Candes et al. published their groundbreaking work on compressive sampling or compressive sensing (CS), CS theory has attracted a lot of attention and become a hot topic, especially in biomedical imaging. In particular, CS-based methods have been developed to enable accurate reconstruction from sparse data in computed tomography (CT) imaging. In this paper, we review the progress in CS-based CT with respect to the three fundamental requirements of CS: sparse representation, incoherent sampling and the reconstruction algorithm. In addition, some potential applications of compressive sampling in CT are introduced.
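    One widely used CS reconstruction algorithm is iterative soft-thresholding (ISTA). The sketch below recovers a sparse vector from underdetermined measurements y = A x; A is a generic random sampling matrix standing in for an incoherent CT-like operator, and all sizes are arbitrary.

    ```python
    # Generic ISTA sketch for sparse recovery (not a CT-specific code).
    import numpy as np

    rng = np.random.default_rng(3)
    m, n, k = 40, 100, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
    x = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    print('relative error:', np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```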

  7. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
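    For context, the classical recurrence that the paper extends (the recursive Green's function sweep for the diagonal blocks of a block-tridiagonal inverse) can be sketched directly; the parallel Schur-complement/cyclic-reduction scheme proposed above is not shown here, and the test matrices are arbitrary.

    ```python
    # Classical RGF sweep for diagonal blocks of G = (E I - H)^{-1}.
    import numpy as np

    def diagonal_green_blocks(E, Hd, Hu):
        """Hd: list of diagonal blocks; Hu[i]: coupling block (i, i+1)."""
        N, b = len(Hd), Hd[0].shape[0]
        I = np.eye(b)
        gL = []                                 # left-connected Green's functions
        for i in range(N):
            sigma = Hu[i-1].conj().T @ gL[i-1] @ Hu[i-1] if i > 0 else 0.0
            gL.append(np.linalg.inv(E * I - Hd[i] - sigma))
        G = [None] * N
        G[N-1] = gL[N-1]
        for i in range(N - 2, -1, -1):          # backward sweep
            G[i] = gL[i] + gL[i] @ Hu[i] @ G[i+1] @ Hu[i].conj().T @ gL[i]
        return G

    # Tiny test: 3 blocks of size 2, random Hermitian chain, complex energy
    rng = np.random.default_rng(4)
    Hd = [(lambda a: (a + a.conj().T) / 2)(rng.standard_normal((2, 2))) for _ in range(3)]
    Hu = [0.1 * rng.standard_normal((2, 2)) for _ in range(2)]
    G = diagonal_green_blocks(0.5 + 1e-6j, Hd, Hu)
    print(np.trace(G[0]))
    ```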

  8. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, liquid water, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical considerations are consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975.

  9. Structural characterisation of semiconductors by computer methods of image analysis

    Science.gov (United States)

    Hernández-Fenollosa, M. A.; Cuesta-Frau, D.; Damonte, L. C.; Satorre Aznar, M. A.

    2005-08-01

    Analysis of microscopic images for automatic particle detection and extraction is a field of growing interest in many scientific fields such as biology, medicine and physics. In this paper we present a method to analyze microscopic images of semiconductors in order to obtain, in a non-supervised way, the main characteristics of the sample under test: growing regions, grain sizes, dendrite morphology and homogenization. In particular, nanocrystalline semiconductors with dimensions less than 100 nm represent a relatively new class of materials. Their short-range structures are essentially the same as bulk semiconductors, but their optical and electronic properties are dramatically different. The images are obtained by scanning electron microscopy (SEM) and processed by the computer methods presented. Traditionally these tasks have been performed manually, which is time-consuming and subjective, in contrast to our computer analysis. The acquired images are first pre-processed in order to improve the signal-to-noise ratio and therefore the detection rate: images are filtered by a weighted-median filter, and contrast is enhanced using histogram equalization. Then, images are thresholded using a binarization algorithm so that growing regions are segmented. This segmentation is based on the different grey levels due to the different sample heights of the growing areas. Next, the resulting image is further processed to eliminate the holes and spots remaining from the previous stage, and this image is used to compute the percentage of such growing areas. Finally, using pattern recognition techniques (contour following and raster-to-vector transformation), single crystals are extracted to obtain their characteristics.
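    The pre-processing and segmentation steps described above map onto standard scikit-image operations. The sketch below follows the stated pipeline generically (median filter, histogram equalization, thresholding, hole/spot removal, area fraction); 'sem_image' is a placeholder, and the plain median filter stands in for the paper's weighted-median filter.

    ```python
    # Rough sketch of the described SEM pre-processing/segmentation pipeline.
    import numpy as np
    from scipy import ndimage
    from skimage import exposure, filters, morphology

    sem_image = np.random.rand(512, 512)          # stand-in for a real SEM image

    smoothed = filters.median(sem_image, morphology.disk(2))  # denoise
    equalized = exposure.equalize_hist(smoothed)              # contrast enhancement
    binary = equalized > filters.threshold_otsu(equalized)    # binarization
    cleaned = ndimage.binary_fill_holes(binary)               # remove holes
    cleaned = morphology.remove_small_objects(cleaned, 32)    # remove spots

    print('growing-area fraction: %.1f%%' % (100 * cleaned.mean()))
    ```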

  10. Data graphing methods, articles of manufacture, and computing devices

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.; Foote, Harlan P.; Whiting, Mark A.

    2016-12-13

    Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set, displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels, selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution, modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions, and after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.

  11. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of six high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfactory metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.

  12. Reduction of atmospheric disturbances in PSInSAR measure technique based on ENVISAT ASAR data for Erta Ale Ridge

    Science.gov (United States)

    Kopeć, Anna

    2018-01-01

    Interferometric synthetic aperture radar (InSAR) is becoming more and more popular for investigating surface deformation associated with volcanism, earthquakes, landslides and post-mining surface subsidence. The measurement accuracy depends on many factors: surface, temporal and geometric decorrelation and orbit errors; however, the largest challenge is the tropospheric delay. Spatial and temporal variations in temperature, pressure and relative humidity are responsible for tropospheric delays. Many methods have been developed so far, but researchers are still searching for one that allows interferograms to be corrected consistently across different regions and times. The article focuses on examining empirical phase-based methods, spectrometer measurements and weather models. These methods were applied to ENVISAT ASAR data for the Erta Ale Ridge in the Afar Depression, East Africa.

  13. Computational Method for Global Sensitivity Analysis of Reactor Neutronic Parameters

    Directory of Open Access Journals (Sweden)

    Bolade A. Adetula

    2012-01-01

    Full Text Available The variance-based global sensitivity analysis technique is robust, has a wide range of applicability, and provides accurate sensitivity information for most models. However, it requires input variables to be statistically independent. A modification to this technique that allows one to deal with input variables that are blockwise correlated and normally distributed is presented. The focus of this study is the application of the modified global sensitivity analysis technique to calculations of reactor parameters that depend on groupwise neutron cross-sections. The main effort in this work is in establishing a method for a practical numerical calculation of the global sensitivity indices. The implementation of the method involves the calculation of multidimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multidimensional integrals, namely Monte Carlo and sparse grids methods, are used, and their efficiency is compared. The method is illustrated and tested on a two-group cross-section dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples. This aspect is addressed in a mini-study, and a preliminary explanation of the results obtained is given.
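    For reference, the unmodified variance-based technique for independent inputs can be sketched directly; the paper's extension to blockwise-correlated inputs is not shown here, and the test function is arbitrary.

    ```python
    # Monte Carlo estimation of first-order Sobol indices (independent inputs).
    import numpy as np

    def sobol_first_order(f, dim, n=100_000, rng=None):
        rng = rng or np.random.default_rng(5)
        A = rng.random((n, dim))            # two independent sample matrices
        B = rng.random((n, dim))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]             # replace column i with B's column
            S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli estimator
        return S

    # Additive test function: x0 should dominate the sensitivity ranking
    f = lambda X: X[:, 0] ** 2 + 0.2 * X[:, 1] + 0.01 * X[:, 2]
    print(sobol_first_order(f, 3))
    ```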

  14. Computational methods for the one-particle Green's function

    International Nuclear Information System (INIS)

    Niessen, W. von; Schirmer, J.; Cederbaum, L.S.

    1984-01-01

    A review is given of computational methods for the one-particle Green's function of finite electronic systems. Two distinct approximation schemes are considered which are based on the diagrammatic perturbation expansions of the Green's function G and of the self-energy part Σ related to G via the Dyson equation. The first scheme referred to as the extended two-particle hole Tamm-Dancoff approximation (extended 2ph-TDA) is derived as an infinite partial summation for Σ and G being complete through third-order in the electronic repulsion. The essential numerical problem is the diagonalization of a symmetric matrix defined in the space of a special class of ionic configurations. The structure of this matrix allows for an efficient two-step diagonalization procedure where a special diagonalization algorithm for matrices with an arrow-type structure is employed. The second approximation scheme discussed here is the outer-valence Green's function method (OVGF) based on a finite perturbation expansion of the self-energy part (it is exact to third order in the electronic repulsion and is supplemented by a geometrical approximation to higher orders). The OVGF is much simpler than the extended 2ph-TDA, since no matrices are to be diagonalized. The range of applicability of the OVGF is, however, restricted. For both approximation schemes spin-free formulations of the working equations are presented. Aspects of an optimal implementation in computer codes are discussed. The numerical performance of the methods is demonstrated by application to the ionization spectra and electron affinities of selected molecules. (orig.)

  15. Diversity and abundance of bird communities associated with semideciduous and pine-oak forests of Viñales National Park

    Directory of Open Access Journals (Sweden)

    Sael Hanoi Pérez Báez

    2016-06-01

    Full Text Available This work was carried out from February to April 2009 in the semideciduous forest of the "Maravillas de Viñales" trail and the pine-oak formation of the Ancón Valley in Viñales National Park, with the main objective of evaluating the diversity and abundance of the bird communities and their degree of association with both formations. The fixed-radius circular plot method was used at 30 count points separated by 150 m, and the vegetation study was based on the methodology proposed by James and Shugart (1970) and Noon (1981) with adaptations; the phenological state of the plant species was recorded and different variables of the forest formation were measured. A total of 44 bird species were detected in the semideciduous forest and 42 in Ancón. Associations were found between several bird and plant species of the formations under study, with species richness (S) increasing with relative abundance and the height of the vegetation decreasing with plant density. The bird communities of the semideciduous forest of the "Maravillas de Viñales" trail and the pine-oak forest of the Ancón Valley showed similar figures for richness, diversity and evenness, but differed in composition and structure. In both formations, Turdus plumbeus and Vireo altiloquus were numerically dominant, and the difference was given by the abundance of Teretistris fernandinae in "Maravillas de Viñales" and Tiaris canorus in the Ancón Valley. The relationship between the bird and plant communities was demonstrated, with several bird species most strongly associated with Clusia rosea, Calophyllum antillanum, Quercus cubana, Matayba oppositifolia and "cordobán".

  16. Bound-Preserving Reconstruction of Tensor Quantities for Remap in ALE Fluid Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Klima, Matej [Czech Technical Univ. in Prague, Praha (Czech Republic); Kucharik, MIlan [Czech Technical Univ. in Prague, Praha (Czech Republic); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Velechovsky, Jan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting of the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells – an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting which aims to constrain the J2 invariant that is proportional to the specific elastic energy and scale the tensor accordingly. We also propose a method which involves remapping and limiting the J2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.
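    The last idea above, remapping the J2 invariant as a scalar and then rescaling the remapped deviatoric stress so its J2 matches, reduces to a simple scaling rule. The sketch below shows that rule on an arbitrary example tensor; the target invariant value is an assumed placeholder for a scalar-remap result.

    ```python
    # Scale a deviatoric stress tensor to match an independently remapped J2.
    import numpy as np

    def j2(s):
        """Second invariant of a deviatoric stress tensor, J2 = (1/2) s:s."""
        return 0.5 * np.tensordot(s, s)

    def scale_to_invariant(s_remap, j2_remap, eps=1e-30):
        """Scale s_remap so that j2(scaled) equals j2_remap."""
        factor = np.sqrt(j2_remap / max(j2(s_remap), eps))
        return factor * s_remap

    s = np.array([[ 2.0,  0.5, 0.0],
                  [ 0.5, -1.0, 0.0],
                  [ 0.0,  0.0, -1.0]])
    j2_target = 3.1                    # value from the scalar remap (assumed)
    s_fixed = scale_to_invariant(s, j2_target)
    print(j2(s_fixed))                 # ~3.1
    ```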

  17. Computational methods for the verification of adaptive control systems

    Science.gov (United States)

    Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.

    2004-08-01

    Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industries in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics are nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the accepted language is empty; otherwise, the approximation is refined and the iteration continued. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in the V&V of safety-critical systems.

  18. Computational carbohydrate chemistry: what theoretical methods can tell us

    Science.gov (United States)

    Woods, Robert J.

    2014-01-01

    Computational methods have had a long history of application to carbohydrate systems and their development in this regard is discussed. The conformational analysis of carbohydrates differs in several ways from that of other biomolecules. Many glycans appear to exhibit numerous conformations coexisting in solution at room temperature and a conformational analysis of a carbohydrate must address both spatial and temporal properties. When solution nuclear magnetic resonance data are used for comparison, the simulation must give rise to ensemble-averaged properties. In contrast, when comparing to experimental data obtained from crystal structures a simulation of a crystal lattice, rather than of an isolated molecule, is appropriate. Molecular dynamics simulations are well suited for such condensed phase modeling. Interactions between carbohydrates and other biological macromolecules are also amenable to computational approaches. Having obtained a three-dimensional structure of the receptor protein, it is possible to model with accuracy the conformation of the carbohydrate in the complex. An example of the application of free energy perturbation simulations to the prediction of carbohydrate-protein binding energies is presented. PMID:9579797

  19. Computational and analytical methods in nonlinear fluid dynamics

    Science.gov (United States)

    Walker, James

    1993-09-01

    The central focus of the program was on the application and development of modern analytical and computational methods to the solution of nonlinear problems in fluid dynamics and reactive gas dynamics. The research was carried out within the Division of Engineering Mathematics in the Department of Mechanical Engineering and Mechanics and principally involved Professors P.A. Blythe, E. Varley and J.D.A. Walker. In addition, the program involved various international collaborations. Professor Blythe completed work on reactive gas dynamics with Professor D. Crighton FRS of Cambridge University in the United Kingdom. Professor Walker and his students carried out joint work with Professor F.T. Smith, of University College London, on various problems in unsteady flow and turbulent boundary layers.

  20. Statistical physics and computational methods for evolutionary game theory

    CERN Document Server

    Javarone, Marco Alberto

    2018-01-01

    This book presents an introduction to Evolutionary Game Theory (EGT), an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often requires the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...

  1. Conference on Boundary and Interior Layers : Computational and Asymptotic Methods

    CERN Document Server

    Stynes, Martin; Zhang, Zhimin

    2017-01-01

    This volume collects papers associated with lectures that were presented at the BAIL 2016 conference, which was held from 14 to 19 August 2016 at Beijing Computational Science Research Center and Tsinghua University in Beijing, China. It showcases the variety and quality of current research into numerical and asymptotic methods for theoretical and practical problems whose solutions involve layer phenomena. The BAIL (Boundary And Interior Layers) conferences, held usually in even-numbered years, bring together mathematicians and engineers/physicists whose research involves layer phenomena, with the aim of promoting interaction between these often-separate disciplines. These layers appear as solutions of singularly perturbed differential equations of various types, and are common in physical problems, most notably in fluid dynamics. This book is of interest for current researchers from mathematics, engineering and physics whose work involves the accurate approximation of solutions of singularly perturbed diffe...

  2. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent different unit operations in the system. Feasible flowsheet configurations are generated using efficient combinatorial algorithms and the performance of each candidate flowsheet is evaluated using a set of flowsheet properties. A systematic notation system called SFILES is used to store the structural information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design.

  3. Search systems and computer-implemented search methods

    Science.gov (United States)

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  4. Methods and computer readable medium for improved radiotherapy dosimetry planning

    Science.gov (United States)

    Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

    2005-11-15

    Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.

  5. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  6. Computation of Hemagglutinin Free Energy Difference by the Confinement Method

    Science.gov (United States)

    2017-01-01

    Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating the reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4 kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344

  7. Software Defects, Scientific Computation and the Scientific Method

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation-independent behaviour and supports the widely observed phenomenon that defects clust...

  8. Computational and Experimental Methods to Decipher the Epigenetic Code

    Directory of Open Access Journals (Sweden)

    Stefano de Pretis

    2014-09-01

    Full Text Available A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Recently, knowledge about the combinations of epigenetic marks occurring in the genome of different cell types under various conditions has been increasing rapidly. Computational methods have been developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association with genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research, illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for significant progress in this area.

  9. Activation method for measuring the neutron spectra parameters. Computer software

    International Nuclear Information System (INIS)

    Efimov, B.V.; Ionov, V.S.; Konyaev, S.I.; Marin, S.V.

    2005-01-01

    A description is given of the mathematical formulation of the problem of determining the spectral characteristics of neutron fields using the unified activation detectors (UKD) developed at RRC KI. The authors' method for processing activation measurement results and calculating the parameters used to estimate the characteristics of neutron spectra is discussed. Features of processing the experimental data obtained in activation measurements with UKD are considered. UKD activation detectors contain several specially selected isotopes whose activity peaks fall on a common activity scale of the spectrum. Computational processing of the measurement results is applied to determine spectrum parameters for nuclear reactor installations with thermal and near-thermal neutron power spectra. An example of data processing for measurements carried out at the RRC KI research reactor F-1 is given [ru]

  10. Computational method for transmission eigenvalues for a spherically stratified medium.

    Science.gov (United States)

    Cheng, Xiaoliang; Yang, Jing

    2015-07-01

    We consider a computational method for the interior transmission eigenvalue problem that arises in acoustic and electromagnetic scattering. The transmission eigenvalues contain useful information about some physical properties, such as the index of refraction. Rather than the existence and estimation of the spectral properties of the transmission eigenvalues, we focus on the numerical calculation, especially for spherically stratified media in R^3. Due to the nonlinearity and the special structure of the interior transmission eigenvalue problem, not many numerical methods exist to date. First, we reduce the problem to a second-order ordinary differential equation. Then, we apply the Hermite finite element to the weak formulation of the equation. With proper rewriting of the matrix-vector form, we transform the original nonlinear eigenvalue problem into a quadratic eigenvalue problem, which can be written as a linear system and solved by the eigs function in MATLAB. This numerical method is fast, effective, and can calculate as many transmission eigenvalues as needed at a time.
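    The standard trick behind the last step, linearizing a quadratic eigenvalue problem (λ²M + λC + K)x = 0 into a doubled-size generalized eigenproblem, can be sketched directly. The example below uses SciPy's dense solver rather than MATLAB's eigs, and M, C, K are arbitrary small test matrices.

    ```python
    # Companion linearization of a quadratic eigenvalue problem.
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(6)
    n = 4
    M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

    # First companion form: A z = lam * B z with z = [x, lam * x]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    eigvals, _ = eig(A, B)

    # Check: det(lam^2 M + lam C + K) should vanish at each eigenvalue
    lam = eigvals[0]
    print(abs(np.linalg.det(lam**2 * M + lam * C + K)))   # ~0
    ```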

  11. Automatic heart positioning method in computed tomography scout images.

    Science.gov (United States)

    Li, Hong; Liu, Kaihua; Sun, Hang; Bao, Nan; Wang, Xu; Tian, Shi; Qi, Shouliang; Kang, Yan

    2014-01-01

    Computed tomography (CT) radiation dose can be reduced significantly by region-of-interest (ROI) CT scanning. Automatically positioning the heart in CT scout images is an essential step towards realizing ROI CT scans of the heart. This paper proposes a fully automatic heart positioning method for CT scout images, including the anteroposterior (A-P) scout image and the lateral scout image. The key steps are to determine the feature points of the heart and obtain part of the heart boundary on the A-P scout image, and then transform that part of the boundary into a polar coordinate system and obtain the whole boundary of the heart using slanted elliptic curve fitting. For heart positioning on the lateral image, the top and bottom boundaries obtained from the A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases of the dataset achieved a very good positioning result of the heart in both the A-P scout image and the lateral scout image. The method may be helpful for ROI CT scans of the heart.

  12. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    Science.gov (United States)

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-08

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.

  13. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and application experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds) of input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables

  14. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), is solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial of a rational parameter: whereas a function f(x) is generally expressed in terms of x, in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x as compared with the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
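    The rational-parameter idea can be illustrated with any smooth function: fit a polynomial in t = x/(1+x) rather than in x, so one polynomial covers a wide range of x. The sketch below uses an arbitrary test function, not the Hubbell integral itself.

    ```python
    # Fit a sixth-degree polynomial in the rational parameter t = x/(1+x).
    import numpy as np

    f = lambda x: np.arctan(x)          # stand-in for I(a, b) along one axis
    x = np.linspace(0.0, 20.0, 400)
    t = x / (1.0 + x)                   # maps [0, 20] onto [0, ~0.95]

    coef_t = np.polyfit(t, f(x), 6)     # sixth-degree polynomial in t
    coef_x = np.polyfit(x, f(x), 6)     # same degree, ordinary variable x

    err_t = np.max(np.abs(np.polyval(coef_t, t) - f(x)))
    err_x = np.max(np.abs(np.polyval(coef_x, x) - f(x)))
    print(f'max error in t: {err_t:.2e}, in x: {err_x:.2e}')
    ```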

  15. A computed microtomography method for understanding epiphyseal growth plate fusion

    Science.gov (United States)

    Staines, Katherine A.; Madi, Kamel; Javaheri, Behzad; Lee, Peter D.; Pitsillides, Andrew A.

    2017-12-01

    The epiphyseal growth plate is a developmental region responsible for linear bone growth, in which chondrocytes undertake a tightly regulated series of biological processes. Concomitant with the cessation of growth and sexual maturation, the human growth plate undergoes progressive narrowing, and ultimately disappears. Despite the crucial role of this growth plate fusion ‘bridging’ event, the precise mechanisms by which it is governed are complex and yet to be established. Progress is likely hindered by the current methods for growth plate visualisation; these are invasive and largely rely on histological procedures. Here we describe our non-invasive method utilising synchrotron x-ray computed microtomography for the examination of growth plate bridging, which ultimately leads to its closure coincident with termination of further longitudinal bone growth. We then apply this method to a dataset obtained from a benchtop microcomputed tomography scanner to highlight its potential for wide usage. Furthermore, we conduct finite element modelling at the micron-scale to reveal the effects of growth plate bridging on local tissue mechanics. Employment of these 3D analyses of growth plate bone bridging is likely to advance our understanding of the physiological mechanisms that control growth plate fusion.

  16. A comparison of computational methods for identifying virulence factors.

    Directory of Open Access Journals (Sweden)

    Lu-Lu Zheng

    Bacterial pathogens continue to threaten public health worldwide. Identifying bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity and to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, computational methods are highly desirable for rapidly and effectively identifying virulence factors from sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on benchmark datasets derived from these species, the network-based method achieved identification accuracies of around 0.9, significantly higher than sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that functional associations such as gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising: it can easily be extended to identify virulence factors in other bacterial species, provided the relevant statistical data are available.
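
    One simple scheme in this network-based spirit is guilt-by-association scoring, where a protein inherits evidence from interaction partners that are already labelled. The toy sketch below only illustrates that idea; the proteins, edge weights and scoring rule are hypothetical and do not reproduce the paper's exact algorithm.

    ```python
    # Toy guilt-by-association scorer in the spirit of network-based
    # virulence-factor identification: score each protein by the weighted
    # fraction of its interaction partners already labelled as virulence
    # factors. Illustration only; not the paper's exact algorithm.
    edges = {  # hypothetical STRING-style weighted interactions
        ("A", "B"): 0.9, ("A", "C"): 0.7, ("B", "D"): 0.4, ("C", "D"): 0.8,
    }
    known_virulence = {"B"}  # hypothetical labelled seed set

    def neighbour_score(protein):
        num = den = 0.0
        for (u, v), w in edges.items():
            if protein in (u, v):
                other = v if u == protein else u
                num += w * (other in known_virulence)
                den += w
        return num / den if den else 0.0

    for p in ("A", "D"):
        print(p, round(neighbour_score(p), 2))   # A: 0.56, D: 0.33
    ```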

  17. Computational methods in decision-making, economics and finance

    CERN Document Server

    Rustem, Berc; Siokos, Stavros

    2002-01-01

    Computing has become essential for the modeling, analysis, and optimization of systems. This book is devoted to algorithms, computational analysis, and decision models. The chapters are organized in two parts: optimization models of decisions, and models of pricing and equilibria.

  18. Brewhouse-Resident Microbiota Are Responsible for Multi-Stage Fermentation of American Coolship Ale

    Science.gov (United States)

    Bokulich, Nicholas A.; Bamforth, Charles W.; Mills, David A.

    2012-01-01

    American coolship ale (ACA) is a type of spontaneously fermented beer that employs production methods similar to traditional Belgian lambic. In spite of its growing popularity in the American craft-brewing sector, the fermentation microbiology of ACA has not been previously described, and thus the interface between production methodology and microbial community structure is unexplored. Using terminal restriction fragment length polymorphism (TRFLP), barcoded amplicon sequencing (BAS), quantitative PCR (qPCR) and culture-dependent analysis, ACA fermentations were shown to follow a consistent progression, initially dominated by Enterobacteriaceae and a range of oxidative yeasts in the first month, then ceding to Saccharomyces spp. and Lactobacillales for the following year. After one year of fermentation, Brettanomyces bruxellensis was the dominant yeast population (occasionally accompanied by minor populations of Candida spp., Pichia spp., and other yeasts) and Lactobacillales remained dominant, though various aerobic bacteria became more prevalent. This work demonstrates that ACA exhibits a conserved core microbial succession in the absence of inoculation, supporting the role of a resident brewhouse microbiota. These findings establish this core microbial profile of spontaneous beer fermentations as a target for production control points and quality standards for these beers. PMID:22530036

  19. Mathematical and computational methods in nuclear physics. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Dehesa, J.S.; Gomez, J.M.G.; Polls, A.

    1984-01-01

    The present proceedings contain the talks given at the Sixth International Granada Workshop on "Mathematical and Computational Methods in Nuclear Physics", held in Granada (Spain), October 3rd-8th, 1983. The lectures, covering various aspects of the many-body problem in nuclei, review present knowledge and include some unpublished material as well. Bohigas and Giannoni discuss the fluctuation properties of spectra of many-body systems by means of random matrix theories, and the attempts to search for quantum mechanical manifestations of classical chaotic motion. The role of spectral distributions (expressed as explicit functions of the microscopic matrix elements of the Hamiltonian) in the statistical spectroscopy of nuclear systems is analyzed by French. Zuker, after a brief review of the theoretical basis of the shell model, discusses a reformulation of the theory of effective interactions and gives a survey of the linked cluster theory. Goeke's lectures center on mean-field methods, particularly TDHF, used in the investigation of large-amplitude nuclear collective motion, pointing out both the successes and failures of the theory. In addition, the present volume contains seminars on related topics.

  20. Global Seabed Materials and Habitats Mapped: The Computational Methods

    Science.gov (United States)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics; both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations, such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only holds the largest collection of seafloor materials data worldwide, it also uses advanced computational mathematics to obtain the best possible coverage and detail. The techniques include linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). They allow efficient and accurate import of huge datasets, thereby making the most of the data that exist; they merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and surveying.

  1. Comparison of different additive manufacturing methods using computed tomography

    Directory of Open Access Journals (Sweden)

    Paras Shah

    2016-11-01

    Additive manufacturing (AM) allows for fast fabrication of three-dimensional objects with the use of considerably fewer resources, less energy consumption and a shorter supply chain than traditional manufacturing. AM has gained significance as a cost-effective method able to produce components with a previously unachievable level of geometric complexity, in prototyping and in end-user industrial applications such as the aerospace, automotive and medical industries. However, these processes currently lack reproducibility and repeatability, with some 'prints' having a high probability of requiring rework or scrapping because they are out of specification or show high porosity, which can lead to failure under structural stress. It is therefore imperative that robust quality systems be implemented so that the waste level of these processes can be significantly decreased. This study presents an artefact optimised for characterisation of form using computed tomography (CT), with representative geometric dimensioning and tolerancing features and internal channels and structures comparable to cooling channels in heat exchangers. The optimisation of the CT acquisition conditions for this artefact is presented in light of feature dimensions and form analysis. This paper investigates the accuracy and capability of CT measurements compared with reference measurements from a coordinate measuring machine (CMM), and evaluates the different AM methods.

  2. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
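
    The distinction between full coarsening and semi-coarsening is easy to see in code. The sketch below only builds the two grid hierarchies; the relaxation and transfer operators of a full solver are omitted, and the grid sizes are illustrative.

    ```python
    # Illustration of the two grid hierarchies: full coarsening halves the
    # grid in every coordinate direction, semi-coarsening in only one (here
    # y), which is what makes the method robust for anisotropic problems.
    def full_coarsening(nx, ny):
        while nx > 2 and ny > 2:
            yield nx, ny
            nx, ny = nx // 2, ny // 2
        yield nx, ny

    def semi_coarsening(nx, ny):
        while ny > 2:
            yield nx, ny
            ny //= 2               # coarsen in one direction only
        yield nx, ny

    print(list(full_coarsening(64, 64)))   # (64,64) (32,32) (16,16) ...
    print(list(semi_coarsening(64, 64)))   # (64,64) (64,32) (64,16) ...
    ```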

  3. Non-unitary probabilistic quantum computing circuit and method

    Science.gov (United States)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
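
    One standard construction behind circuits of this kind is the unitary dilation of a non-unitary operator. The numpy sketch below is an illustration of that principle under the assumption that the operator is scaled so its norm is at most one; it is not the patented circuit itself.

    ```python
    # Numpy sketch of one standard construction behind such circuits: embed a
    # non-unitary operator A (scaled so that ||A|| <= 1) in a unitary U acting
    # on the state plus one ancilla qubit. If the ancilla is measured as |0>
    # the attempt "succeeds" and the state is proportional to A|psi>;
    # otherwise one retries. Illustration only, not the patented circuit.
    import numpy as np
    from scipy.linalg import sqrtm

    def dilate(A):
        n = A.shape[0]
        I = np.eye(n)
        B = sqrtm(I - A @ A.conj().T)        # well defined because ||A|| <= 1
        C = sqrtm(I - A.conj().T @ A)
        return np.block([[A, B], [C, -A.conj().T]])   # unitary dilation of A

    A = np.array([[0.6, 0.2],
                  [0.1, 0.5]])               # a non-unitary 1-qubit operator
    U = dilate(A)
    assert np.allclose(U @ U.conj().T, np.eye(4))

    psi = np.array([1.0, 0.0])               # input state
    out = U @ np.concatenate([psi, np.zeros(2)])   # ancilla |0> block first
    success_prob = np.linalg.norm(out[:2]) ** 2    # P(ancilla measured |0>)
    assert np.allclose(out[:2], A @ psi)           # success branch applies A
    print("success probability:", success_prob)
    ```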

  4. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    Full text: For a developing country buying its first nuclear power plants from a foreign supplier, regardless of the type and scope of the contract, a certain number of activities have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and of the type and size of the reactors, to bid parameter specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK), work is ongoing on the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were a small working team, the lack of large and powerful computers, the absence of reliable basic nuclear data, and a shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the author's opinion, however, that mutual cooperation of developing countries with similar problems and similar goals could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  5. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    Section 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  6. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Summary. Owing to the diversity and complexity of the organizational management tasks of a large enterprise, building an information management system requires establishing interconnected complexes of means that collect, transfer, accumulate and process, in the most efficient way, the information needed by decision makers of different ranks in the governance process. The main trends in the construction of integrated management information systems can be considered to be: the creation of integrated data processing systems through centralized storage and processing of data arrays; the organization of computer systems for time-sharing; an aggregate-block principle for the integrated complex; and the use of a wide range of peripheral devices with unified information and hardware interfaces. Attention is paid chiefly to the systematic study of the complex of technical support, in particular to defining quality criteria for the operation of the technical complex, developing the information base, methods for analysing management information systems, defining requirements for technical means, and methods for the structural synthesis of the major subsystems. The aim is thus to study the integrated management information system on the basis of a systematic approach, and to develop a number of methods of analysis and synthesis suitable for use in practical engineering system design. The objective function of the complex is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy, while minimizing the overall costs of establishing and operating the technical complex. Achieving this objective function requires a particular organization of the interaction of information...

  7. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described that is intended to account for and eliminate the effects of secondary processes when interpreting gamma and neutron-gamma logging readings. With slight modifications, the program can also serve as a mathematical basis for standardizing logging diagrams by the method of multidimensional regression analysis and for estimating reservoir rock properties.

  8. Procesamiento de señales biomédicas mediante instrumento virtual desarrollado con matlab

    OpenAIRE

    Sánchez Márquez, Carlos

    2014-01-01

    This work is an alternative for the study and development of biomedical and instrumentation prototypes, using the Matlab platform to process real biomedical signals. Biomedical signals are continuous in time and of small amplitude, on the order of millivolts, and are affected by body noise, equipment noise and ambient noise, not to mention the coupled 60 Hz mains noise. Accordingly, data acquisition can be done...

  9. A DETERMINAÇÃO DOS PRODUTOS AVANÇADOS DE GLICAÇÃO (AGES) E DE LIPOXIDAÇÃO (ALES) EM ALIMENTOS E EM SISTEMAS BIOLÓGICOS: AVANÇOS, DESAFIOS E PERSPECTIVAS

    Directory of Open Access Journals (Sweden)

    Júnia H. Porto Barbosa

    2016-06-01

    Advanced glycation (AGEs) and lipoxidation (ALEs) products are formed through specific condensation reactions between nucleophiles (amino groups of free amino acids or of their residues in peptides, aminophospholipids or proteins) and electrophiles (carbonyls of reducing sugars, oxidized lipids or others), generating well-defined sets of covalent adducts. The ε-amino group of lysine is the most reactive precursor in proteins and the primary target of carbohydrate attack. AGE/ALE accumulation has consequences for the development of vascular, renal, neural and ocular complications, as well as for the triggering of inflammatory and neurodegenerative diseases. Therefore, detecting and quantifying AGEs/ALEs and, in some cases, assessing the extent of glycation in biomolecules of different matrices is of primary interest for science, and reliable analytical methods are required. Together with basic concepts, this review presents the main advances, challenges and prospects of research involving AGEs and ALEs in biological and food systems, exploring practical strategies to ensure greater reliability in the analysis of these compounds in different matrices.

  10. A Computational Study of the Boundary Value Methods and the Block Unification Methods for y″=f(x,y,y′)

    Directory of Open Access Journals (Sweden)

    T. A. Biala

    2016-01-01

    We derive a new class of linear multistep methods (LMMs) via the interpolation and collocation technique. We discuss the use of these methods as boundary value methods and block unification methods for the numerical approximation of general second-order initial and boundary value problems. The convergence of these families of methods is established, and several test problems are given to compare the methods in terms of accuracy and computational efficiency.
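
    The interpolation/collocation derivation amounts to solving a small linear system of exactness (moment) conditions for the method's coefficients. As a minimal illustration, the sketch below derives the classical three-point weights for y″ by collocating monomials; the paper's actual LMMs use more nodes and off-step points.

    ```python
    # Sketch of the collocation idea: choose weights w_j so that
    # sum_j w_j * y(x_j) reproduces y''(0) exactly for all polynomials up to
    # degree 2. With nodes {-h, 0, h} this recovers the classical
    # central-difference stencil (1, -2, 1)/h^2.
    import numpy as np

    h = 0.1
    nodes = np.array([-h, 0.0, h])

    # Row k enforces exactness for y(x) = x^k; RHS is (x^k)'' at x = 0.
    V = np.vander(nodes, 3, increasing=True).T     # moment conditions
    rhs = np.array([0.0, 0.0, 2.0])                # (1)''=0, (x)''=0, (x^2)''=2
    w = np.linalg.solve(V, rhs)

    print(w * h**2)   # -> approximately [ 1. -2.  1.]
    ```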

  11. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for presenting modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research into the…

  12. Natural hazards in the karst areas of the Viñales National Park, Cuba

    Science.gov (United States)

    Govea Blanco, Darlenys; Farfan Gonzalez, Hermes; Dias Guanche, Carlos; Parise, Mario; Ramirez, Robert

    2010-05-01

    discharges have been mapped on the basis of the outcomes of inquiries carried out in the villages of the area, and of the documentation recorded in the Viñales National Park archives since its foundation in the year 2000. Slope movements in karst are quite difficult to map and survey, given the wilderness of the area, so different methodologies were applied to this aim. Mass movements were mapped using the PNUMA-FAO method, which maps erosional features based upon a matrix analysis; the results were checked in the field and processed by means of GIS. As mentioned above, natural hazards from meteorological events are the most dangerous, partly because of the peculiar character of karst landforms and the hydrologic recharge of karst territories. For instance, the arrival of waters from allochthonous, non-karst territories has a great influence on the overall amount of water present in karst, both at the surface and underground, and the discharge from karst springs or rivers depends strongly on such waters. Many caves are also conditioned by the presence of water and may periodically become flooded, especially when located at the mountain or mogote foothills, well within the areas most likely to be inundated. At the same time, flood occurrence greatly affects anthropogenic activities and is often the origin of the main damage to people and society. The other natural hazards cited are far less disruptive and cause minor damage compared to floods, because the great majority of mass movements and erosional phenomena occur in sectors where human presence and activity are much lower, so that economic activities are less affected; lightning, on the other hand, causes wildfires generally limited to the highest peaks and mogotes (residual hills and ridges in Cuban tropical karst), which again rarely affect human activities.

  13. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process and store data, and to effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (the Safeguards standard reference for the nuclear fuel cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission-focused applications. (author)

  14. THE METHOD OF DESIGNING ASSISTED ON COMPUTER OF THE

    Directory of Open Access Journals (Sweden)

    LUCA Cornelia

    2015-05-01

    The design of footwear soles is based on the shoe last. Shoe lasts have irregular shapes, with various curves that cannot be represented by a simple mathematical function, so designing footwear soles requires taking some base contours from the shoe last; these contours are obtained with high precision in a 3D CAD system. This paper presents a computer-assisted method for designing footwear soles. The shoe last is copied using a 3D digitizer: the last is positioned on the data-gathering peripheral, which automatically follows the last's surface. The wire network obtained through digitizing is numerically interpolated with interpolation functions to obtain the spatial numerical shape of the shoe last. The 3D design of the sole is then carried out on this numerical shape in the following steps: creating the sole's surface, creating the lateral surface of the sole's shape, obtaining the linking surface between the lateral and plantar sides of the sole and the sole's margin, and designing the anti-slip area. The main advantages of the method are its design precision, visualization of the sole in 3D space, and the possibility of making a well-founded decision on the acceptance of a new sole pattern.

  15. Nuevos puñales ibéricos en Andalucía (1): puñales de frontón

    Directory of Open Access Journals (Sweden)

    Quesada Sanz, Fernando

    1999-12-01

    In this paper we publish a group of 'fronton type' daggers from different sites in Andalusia. This lot nearly doubles the pre-existing catalogue of fronton daggers in the region and reinforces our hypothesis that the type originated and was produced in southern Iberia. We also include an illustrated catalogue of already-known examples to facilitate comparison. In the second part of this work we will attempt the same for the 'atrophied antennae' or 'Alcacer do Sal' type of daggers.

  16. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  17. Analysis of high-tech methods of illegal remote computer data access

    OpenAIRE

    Polyakov, V. V.; Slobodyan, S. М.

    2007-01-01

    An analysis of high-tech methods of committing crimes in the sphere of computer information has been performed; the crimes considered were committed from remote computers. The virtual traces left when such methods are used are identified, and specific proposals for investigating and preventing this type of computer intrusion are developed.

  18. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development, serving as the weft that connects the various realms of nuclear engineering. An introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This third issue introduces continuum simulation methods and their applications: spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  19. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  20. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

    ... tanh method, sinh–cosh method, homotopy analysis method (HAM), variational iteration method (VIM) and homotopy perturbation method (HPM). The homotopy perturbation method (HPM) was established by Ji-Huan He in 1999. The method has been used by many researchers to analyse a wide variety of scientific...

  1. The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

    Science.gov (United States)

    Walker, Wade A.

    2012-01-01

    In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
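
    The conservation bookkeeping behind the "flattening" step can be shown in a few lines. The sketch below is a one-dimensional toy with hypothetical cell data, not the RRM algorithm itself, which also chooses adaptively where and how much to chop.

    ```python
    # One-dimensional toy of RRM's "flattening" step: a run of cells is
    # replaced by a single uniform cell carrying exactly the same total mass,
    # momentum, and energy, erasing the internal gradients.
    import numpy as np

    dx  = np.array([0.5, 1.0, 0.5])      # widths of the chopped-out cells
    rho = np.array([1.0, 2.0, 4.0])      # densities
    u   = np.array([3.0, 1.0, 0.0])      # velocities
    E   = np.array([5.0, 6.0, 9.0])      # total energy per unit volume

    mass     = np.sum(rho * dx)          # conserved totals
    momentum = np.sum(rho * u * dx)
    energy   = np.sum(E * dx)

    width   = dx.sum()                   # the single replacement cell
    rho_new = mass / width               # constant density
    u_new   = momentum / mass            # constant velocity
    E_new   = energy / width             # constant energy density

    print(f"flattened cell: rho={rho_new:.3f}, u={u_new:.3f}, E={E_new:.3f}")
    ```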

  2. Stability profile of flavour-active ester compounds in ale and lager ...

    African Journals Online (AJOL)

    Currently, one of the main quality problems of beer is the change of its chemical composition during storage, which alters its sensory properties. In this study, ale and lager beers were produced and aged for three months at two storage temperatures. Concentration of volatile ester compounds (VECs) in the beers was ...

  3. ALE scalar-flat Kähler metrics on non-compact weighted projective spaces

    OpenAIRE

    Apostolov, Vestislav; Rollin, Yann

    2015-01-01

    We construct new explicit toric scalar-flat Kähler ALE metrics on weighted projective spaces of non-compact type, which we use to obtain smooth extremal Kähler metrics on appropriate resolutions of orbifolds. In particular, we obtain new extremal metrics on certain resolutions of weighted projective spaces of compact type.

  4. Stability profile of flavour-active ester compounds in ale and lager ...

    African Journals Online (AJOL)

    2013-01-30

    Currently, one of the main quality problems of beer is the change of its chemical composition during storage, which alters its sensory properties. In this study, ale and lager beers were produced and aged for three months at two storage temperatures. Concentration of volatile ester compounds (VECs) in the...

  5. La dialyse péritonéale chez les patients de moins de vingt ans ...

    African Journals Online (AJOL)

    Peritoneal dialysis in patients under twenty years of age: the experience of a Moroccan university hospital centre. Intissar Haddiya, Hakima Rhou, Fatima Ezaitouni, Naima Ouzeddoun, Rabia Bayahia, Loubna Benamar ...

  6. Accuracy, resolution, and computational complexity of a discontinuous Galerkin finite element method

    NARCIS (Netherlands)

    van der Ven, H.; van der Vegt, Jacobus J.W.; Cockburn, B.; Karniadakis, G.E.; Shu, C.W.

    2000-01-01

    This series contains monographs of lecture notes type, lecture course material, and high-quality proceedings on topics described by the term "computational science and engineering". This includes theoretical aspects of scientific computing such as mathematical modeling, optimization methods,

  7. Modelling of dusty plasma properties by computer simulation methods

    Energy Technology Data Exchange (ETDEWEB)

    Baimbetov, F B [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Ramazanov, T S [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Dzhumagulova, K N [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Kadyrsizov, E R [Institute for High Energy Densities of RAS, Izhorskaya 13/19, Moscow 125412 (Russian Federation); Petrov, O F [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Gavrikov, A V [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan)

    2006-04-28

    Computer simulation of dusty plasma properties is performed. The radial distribution functions and the diffusion coefficient are calculated on the basis of Langevin dynamics, and a comparison with experimental data is made.

  8. Computational Methods for Predictive Simulation of Stochastic Turbulence Systems

    Science.gov (United States)

    2015-11-05

    ...computing time (even weeks), while performing enough realizations to generate a full PDF can require thousands of realizations. This is the fundamental and...

  9. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  10. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Dynamic progress in computational techniques now allows the development of various methods that offer significant speed-ups, especially for problems in quantum optics and quantum computing. In this work, we propose computational solutions that re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments, in which multi-core CPUs and modern many-core GPUs can be used. As a consequence, the new computational routines are developed more effectively than those applied in commonly used packages such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python.
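
    For orientation, a single quantum trajectory for the simplest open system, a decaying two-level atom, can be written in a few lines. This serial, first-order sketch only shows what each trajectory computes; the rates, step size and example system are illustrative, and the parallel CPU/GPU implementations discussed above batch many such trajectories at once.

    ```python
    # Minimal single-trajectory sketch of the quantum trajectory method (QTM):
    # deterministic non-Hermitian evolution interrupted by stochastic quantum
    # jumps, here for a decaying two-level atom. Illustrative parameters.
    import numpy as np

    rng = np.random.default_rng(1)
    gamma, dt, steps = 1.0, 0.001, 5000
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
    H_eff = -0.5j * gamma * (sm.conj().T @ sm)       # no coherent drive here

    psi = np.array([0, 1], dtype=complex)            # start in excited state |e>
    for _ in range(steps):
        p_jump = gamma * dt * np.abs(psi[1]) ** 2    # jump probability this step
        if rng.random() < p_jump:
            psi = sm @ psi                           # emit a photon: collapse to |g>
        else:
            psi = psi - 1j * dt * (H_eff @ psi)      # non-Hermitian Euler step
        psi /= np.linalg.norm(psi)                   # renormalise either way

    # Averaging this observable over many trajectories recovers exp(-gamma*t).
    print("P(excited) at t=5:", np.abs(psi[1]) ** 2)
    ```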

  11. Decodificación de Movimientos Individuales de los Dedos y Agarre a Partir de Señales Mioeléctricas de Baja Densidad

    Directory of Open Access Journals (Sweden)

    John J. Villarejo Mayor

    2017-04-01

    of able-bodied subjects. Different methods were analyzed to classify individual finger flexion, hand gestures and different grasps using four electrodes, considering the low level of muscle contraction in these tasks. Multiple features of sEMG signals were also analyzed, including traditional magnitude-based features and fractal analysis. Statistical significance was computed for all the methods using different sets of features, for both groups of subjects (able-bodied and amputees). For amputees, results showed accuracy up to 99.4% for individual finger movements, higher than that achieved for grasp movements (up to 93.3%). The best performance was achieved using a support vector machine (SVM), followed very closely by K-nearest neighbors (KNN). However, KNN yields better overall performance because it is faster than SVM, an advantage for real-time applications. The results show that the proposed method is suitable for accurately controlling dexterous prosthetic hands, providing more functionality and better acceptance for amputees. Keywords: myoelectric signals, upper-limb prosthesis, low-density surface electromyography, dexterous hand gestures, pattern recognition
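
    The classifier comparison can be mocked up with standard tooling. The sketch below trains the study's two best performers, SVM and KNN, on synthetic stand-ins for low-density sEMG feature vectors; the feature counts, class structure and data are hypothetical.

    ```python
    # Schematic SVM-vs-KNN comparison on synthetic stand-ins for sEMG feature
    # vectors (real features would be magnitude- and fractal-based values
    # from four electrodes). Data and dimensions are hypothetical.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_per_class, n_features = 100, 8          # e.g. two features per electrode
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                   for c in range(4)])        # four hypothetical movement classes
    y = np.repeat(np.arange(4), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
        print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
    ```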

  12. Modeling Three-Dimensional Shock Initiation of PBX 9501 in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L; Springer, H K; Mace, J; Mas, E

    2008-07-08

    A recent SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has provided 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate and study code predictions. These SMIS tests used a powder gun to shoot scaled NATO standard fragments into a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. This SMIS real-world shot scenario creates a unique test-bed because (1) SMIS tests facilitate the investigation of 3D Shock to Detonation Transition (SDT) within the context of a considerable suite of diagnostics, and (2) many of the fragments arrive at the impact plate off-center and at an angle of impact. A particular goal of these model validation experiments is to demonstrate the predictive capability of the ALE3D implementation of the Tarver-Lee Ignition and Growth reactive flow model [2] within a fully 3-dimensional regime of SDT. The 3-dimensional Arbitrary Lagrange Eulerian (ALE) hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations reproduce observed 'Go/No-Go' 3D Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied for the response of heterogeneous high explosives in the SDT regime.

  13. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  14. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    This paper aims to demonstrate a clear relationship between the Lagrange and Newton-Euler equations as computational methods for robot dynamics, from which we derive a systematic method suitable for either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation, and that it can be applied to biped systems as well as to some simple closed-chain robot systems.
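
    The cross-product operation such formulations build on is usually encoded as a skew-symmetric matrix, so that dynamic terms become plain matrix products that work equally well symbolically and numerically. The sketch below shows this for one centripetal-acceleration term; the numbers are illustrative and this is not the paper's full formulation.

    ```python
    # The cross product as a skew-symmetric matrix: terms like the
    # centripetal acceleration omega x (omega x r) become matrix products.
    # Illustrative values only.
    import numpy as np

    def skew(v):
        """Matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    omega = np.array([0.0, 0.0, 2.0])     # angular velocity of a link
    r = np.array([0.3, 0.0, 0.0])         # position of the link's mass centre

    a_centripetal = skew(omega) @ (skew(omega) @ r)
    assert np.allclose(a_centripetal, np.cross(omega, np.cross(omega, r)))
    print(a_centripetal)                  # -> [-1.2, 0, 0]
    ```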

  15. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, with a vector extrapolation method as its accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with typical methods are also made.
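
    The acceleration pattern, run a stationary iteration and periodically splice in an extrapolation step, can be sketched on the simplest PageRank iteration. Below, a componentwise Aitken delta-squared step stands in for the paper's vector extrapolation, and the multilevel aggregation itself is omitted; the link matrix is a hypothetical four-page web.

    ```python
    # Power iteration for PageRank with a periodic Aitken-style acceleration
    # step -- a componentwise stand-in for the vector extrapolation used in
    # such methods. The tiny link matrix is illustrative.
    import numpy as np

    links = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
    P = links / links.sum(axis=1, keepdims=True)      # row-stochastic
    alpha, n = 0.85, 4
    G = alpha * P + (1 - alpha) / n                   # Google matrix

    x_prev2 = x_prev = None
    x = np.full(n, 1.0 / n)
    for it in range(1, 101):
        x_prev2, x_prev, x = x_prev, x, x @ G         # one power step
        if it % 10 == 0 and x_prev2 is not None:      # periodic acceleration
            d1 = x_prev - x_prev2
            d2 = x - 2 * x_prev + x_prev2
            safe = np.abs(d2) > 1e-12                 # avoid division by ~0
            x[safe] = x_prev2[safe] - d1[safe] ** 2 / d2[safe]
            x = np.abs(x) / np.abs(x).sum()           # restore a probability vector
        if np.abs(x - x_prev).sum() < 1e-10:
            break

    print(it, x)   # iteration count and stationary PageRank vector
    ```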

  16. Method for simulating paint mixing on computer monitors

    Science.gov (United States)

    Carabott, Ferdinand; Lewis, Garth; Piehl, Simon

    2002-06-01

    Computer programs like Adobe Photoshop can generate a mixture of two 'computer' colors by using the Gradient control. However, the resulting colors diverge from the equivalent paint mixtures in both hue and value. This study examines why programs like Photoshop are unable to simulate paint or pigment mixtures, and offers a solution using Photoshop's existing tools. The article discusses how a library of colors simulating paint mixtures is created from 13 artists' colors. The mixtures can be imported into Photoshop as a color swatch palette of 1248 colors and as 78 continuous or stepped gradient files, all accessed in a new software package, Chromafile.
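
    The divergence is easy to reproduce: averaging RGB values models additive light, while paint mixes subtractively. The sketch below contrasts a gradient-style average with a crude subtractive stand-in (averaging per-channel absorbances); it is an illustration of the discrepancy, not the article's swatch-library method, and real paint prediction would need a pigment model such as Kubelka-Munk.

    ```python
    # Additive (RGB average) vs a crude subtractive mix of two pigment-like
    # reflectance triples. Values are illustrative.
    import numpy as np

    def mix_additive(c1, c2, t=0.5):
        return (1 - t) * c1 + t * c2            # what a Gradient control does

    def mix_subtractive(c1, c2, t=0.5):
        a1, a2 = -np.log(c1), -np.log(c2)       # absorbance per channel
        return np.exp(-((1 - t) * a1 + t * a2))

    yellow = np.array([0.9, 0.9, 0.1])          # pigment-like reflectances
    blue   = np.array([0.1, 0.3, 0.9])

    print("additive:   ", mix_additive(yellow, blue).round(2))     # pale grey-green
    print("subtractive:", mix_subtractive(yellow, blue).round(2))  # darker green
    ```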

  17. Comparison of four classification methods for brain-computer interface

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Bobrov, P.

    2011-01-01

    Roč. 21, č. 2 (2011), s. 101-115 ISSN 1210-0552 R&D Projects: GA MŠk(CZ) 1M0567; GA ČR GA201/05/0079; GA ČR GAP202/10/0262 Institutional research plan: CEZ:AV0Z10300504 Keywords : brain computer interface * motor imagery * visual imagery * EEG pattern classification * Bayesian classification * Common Spatial Patterns * Common Tensor Discriminant Analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 0.646, year: 2011

  18. A new computer method for temperature measurement based on an optimal control problem

    NARCIS (Netherlands)

    Damean, N.; Houkes, Z.; Regtien, Paulus P.L.

    1996-01-01

    A new computer method to measure extreme temperatures is presented. The method reduces the measurement of the unknown temperature to solving an optimal control problem on a numerical computer. Based on this method, a new device for temperature measurement has been built. It consists of a

  19. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. A method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
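
    The underlying computation, minimizing the total Gibbs energy subject to element-balance constraints, is straightforward to demonstrate with a generic optimizer. The sketch below treats the textbook N2O4/NO2 system as an ideal gas at 1 bar; it illustrates the approach and is not the article's program.

    ```python
    # Equilibrium by free-energy minimisation: minimise the total Gibbs
    # energy of an ideal-gas mixture subject to element balance, here for
    # N2O4 <-> 2 NO2 at 298.15 K and 1 bar (textbook standard Gibbs energies
    # of formation).
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 298.15
    g0 = np.array([97.9e3, 51.3e3])          # J/mol for [N2O4, NO2]

    def total_gibbs(n):
        n = np.maximum(n, 1e-12)             # keep the logarithms defined
        mu = g0 + R * T * np.log(n / n.sum())   # ideal-gas chemical potentials
        return float(n @ mu)

    # Start from 1 mol N2O4: nitrogen balance fixes 2*n_N2O4 + n_NO2 = 2.
    cons = {"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - 2.0}
    res = minimize(total_gibbs, x0=[0.5, 1.0], bounds=[(0, None), (0, None)],
                   constraints=cons)
    print("equilibrium moles [N2O4, NO2]:", res.x.round(3))
    ```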

  20. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for the assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  1. Numerical computation methods for magnet design of spectrometer, accelerator and beam transport systems

    International Nuclear Information System (INIS)

    Fan Mingwu; Maio Yixin

    1986-01-01

    High calculation accuracy is expected in the design of spectrometers, accelerators and beam transport systems, and three-dimensional electromagnetic field computation is needed in some cases. Numerical computation methods have come to dominate this area. Advantages and disadvantages of the methods are discussed, and errors between computed and measured values are analysed. Making full use of these methods is discussed on the basis of some practical models.

  2. Computational methods to dissect cis-regulatory transcriptional ...

    Indian Academy of Sciences (India)

    www.ias.ac.in/article/fulltext/jbsc/032/07/1325-1330 ... The integration of computer science with biology has expedited molecular modelling and processing of large-scale data inputs such as microarrays, analysis of genomes, transcriptomes ...

  3. An affective music player: Methods and models for physiological computing

    NARCIS (Netherlands)

    Janssen, J.H.; Westerink, J.H.D.M.; van den Broek, Egon

    2009-01-01

    Affective computing is embraced by many to create more intelligent systems and smart environments. In this thesis, a specific affective application is envisioned: an affective physiological music player (APMP), which should be able to direct its user's mood. In a first study, the relationship

  4. Method for quantitative assessment of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Dearien, J.A.; Davis, C.B.; Matthews, L.J.

    1979-01-01

    A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison

  5. Generalized Look-Ahead Methods for Computing Stationary Densities

    OpenAIRE

    R. Anton Braun; Huiyu Li; John Stachurski

    2011-01-01

    The look-ahead estimator is used to compute densities associated with Markov processes via simulation. We study a framework that extends the look-ahead estimator to a much broader range of applications. We provide a general asymptotic theory for the estimator, where both L1 consistency and L2 asymptotic normality are established.
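
    The estimator itself is essentially one line: average the one-step transition density over simulated states. The sketch below applies it to an AR(1) process, where the stationary density is known in closed form and can be checked; all parameters are illustrative.

    ```python
    # Look-ahead estimator for the stationary density of an AR(1) process
    # X' = rho*X + sigma*W: average the transition density p(x_i, y) over
    # simulated states x_i. Parameters are illustrative.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    rho, sigma, n = 0.9, 0.5, 10_000

    x = np.empty(n)
    x[0] = 0.0
    for t in range(n - 1):
        x[t + 1] = rho * x[t] + sigma * rng.standard_normal()

    y = np.linspace(-3, 3, 7)
    f_hat = norm.pdf(y[:, None], loc=rho * x[None, :], scale=sigma).mean(axis=1)
    f_true = norm.pdf(y, scale=sigma / np.sqrt(1 - rho**2))   # known answer here
    print(np.c_[y, f_hat, f_true])   # estimate tracks the true density
    ```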

  6. Computer methods in designing tourist equipment for people with disabilities

    Science.gov (United States)

    Zuzda, Jolanta GraŻyna; Borkowski, Piotr; Popławska, Justyna; Latosiewicz, Robert; Moska, Eleonora

    2017-11-01

    Modern technologies enable disabled people to enjoy physical activity every day. Many new devices are individually fitted and created for people who enjoy active tourism, giving them wider opportunities for active leisure. The process of creating this type of device, at every stage from initial design through assessment to validation, is assisted by various types of computer support software.

  7. Forest Fire History... A Computer Method of Data Analysis

    Science.gov (United States)

    Romain M. Meese

    1973-01-01

    A series of computer programs is available to extract information from the individual Fire Reports (U.S. Forest Service Form 5100-29). The programs use a statistical technique to fit a continuous distribution to a set of sampled data. The goodness-of-fit program is applicable to data other than the fire history. Data summaries illustrate analysis of fire occurrence,...

  8. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  9. Computational methods to dissect cis-regulatory transcriptional ...

    Indian Academy of Sciences (India)

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for ...

  10. Computed radiography imaging plates and associated methods of manufacture

    Science.gov (United States)

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  11. New Methods of Mobile Computing: From Smartphones to Smart Education

    Science.gov (United States)

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  12. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-22

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  13. Computational Nuclear Physics and Post Hartree-Fock Methods

    Science.gov (United States)

    Lietz, Justin G.; Novario, Samuel; Jansen, Gustav R.; Hagen, Gaute; Hjorth-Jensen, Morten

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of Chaps. 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in Chap. 9, the in-medium similarity renormalization group theory of dense fermionic systems of Chap. 10 and the Green's function approach in Chap. 11 We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  14. Computational Nuclear Physics and Post Hartree-Fock Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lietz, Justin [Michigan State University; Sam, Novario [Michigan State University; Hjorth-Jensen, M. [University of Oslo, Norway; Hagen, Gaute [ORNL; Jansen, Gustav R. [ORNL

    2017-05-01

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  15. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computation-based co-expression analysis can be used to accurately identify functional relationships between genes and, by inference, their encoded proteins. Here we describe how a computation-based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  16. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset.

  17. A theoretical method for assessing disruptive computer viruses

    Science.gov (United States)

    Wu, Yingbo; Li, Pengdeng; Yang, Lu-Xing; Yang, Xiaofan; Tang, Yuan Yan

    2017-09-01

    To assess the prevalence of disruptive computer viruses in the situation that every node in a network has its own virus-related attributes, a heterogeneous epidemic model is proposed. A criterion for the global stability of the virus-free equilibrium and a criterion for the existence of a unique viral equilibrium are given, respectively. Furthermore, extensive simulation experiments are conducted, and some interesting phenomena are found from the experimental results. On this basis, some policies of suppressing disruptive viruses are recommended.
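
    As an illustration of the kind of heterogeneous, node-level virus model the abstract describes, the sketch below integrates a mean-field SIS-type system in which every node has its own infection and cure rate; the network and all rates are invented placeholders.

    ```python
    # Node-level SIS mean-field model: x_i is the infection probability of
    # node i, with per-node infection rate beta_i and cure rate delta_i.
    # All data are synthetic placeholders, not the paper's model parameters.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = (rng.random((n, n)) < 0.08).astype(float)   # random directed network
    np.fill_diagonal(A, 0.0)
    beta = rng.uniform(0.02, 0.10, n)               # heterogeneous infection rates
    delta = rng.uniform(0.05, 0.20, n)              # heterogeneous cure rates

    x = np.full(n, 0.01)                            # small initial infection
    dt, steps = 0.1, 2000
    for _ in range(steps):                          # forward-Euler integration
        dx = (1.0 - x) * beta * (A @ x) - delta * x
        x = np.clip(x + dt * dx, 0.0, 1.0)

    print("mean infection level after integration:", x.mean())
    ```

    Whether such a trajectory dies out or settles at a viral equilibrium is exactly what the paper's two stability criteria distinguish.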

  18. The cell method: a purely algebraic computational method in physics and engineering

    CERN Document Server

    Ferretti, Elena

    2014-01-01

    The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in fields such as electromagnetics, electrodynamics, solid mechanics and fluids. CM addresses non-locality in continuum mechanics, an especially important circumstance in modeling heterogeneous materials. Professional engineers and scientists, as well as graduate students, are offered: A general overview of physics and its mathematical descriptions; Guidance on how to build direct, discrete formulations; Coverag...

  19. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
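
    The trade-off the analysis highlights, where a high fixed communication startup cost favors long messages, can be seen in a toy cost model; the numbers below are illustrative, not measured machine parameters.

    ```python
    # Toy linear communication-cost model: a message of m words costs
    # t_s + m * t_w, so aggregation amortizes the startup cost t_s.
    t_s, t_w = 100.0, 1.0          # illustrative startup and per-word costs

    def transfer_time(n_messages, words_each):
        return n_messages * (t_s + words_each * t_w)

    print("1000 one-word messages:", transfer_time(1000, 1))    # 101000.0
    print("one 1000-word message :", transfer_time(1, 1000))    #   1100.0
    ```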

  20. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, one of the newest analytical methods, the new homotopy perturbation method (NHPM), is considered to solve thermoelasticity equations. Results obtained by NHPM, which does not need small parameters, are compared with the numerical results and a very good agreement is found. This method ...

  1. A comparison of direct and indirect analytical methods of computing ...

    African Journals Online (AJOL)

    The first step in the analysis of gravity anomalies for mineral exploration is the extraction of residual gravity anomalies from the observed gravity anomalies. This can be achieved by graphical or analytical methods. Generally, direct and indirect analytical methods are considered better than graphical methods. Telford et al ...

  2. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
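
    For a concrete, simplified instance of the quantities being optimized, the sketch below evaluates the SSP (monotonicity) coefficient of a given explicit linear multistep method; the paper's algorithm searches over the method coefficients themselves, whereas this only checks one method.

    ```python
    # Explicit linear multistep method
    #   u_n = sum_j (alpha_j * u_{n-j} + h * beta_j * f_{n-j}).
    # With alpha_j, beta_j >= 0 its SSP coefficient is min(alpha_j / beta_j)
    # over steps with beta_j > 0.

    def ssp_coefficient(alpha, beta):
        if any(a < 0 for a in alpha) or any(b < 0 for b in beta):
            return 0.0                     # not monotonicity preserving
        ratios = [a / b for a, b in zip(alpha, beta) if b > 0]
        return min(ratios) if ratios else float("inf")

    # optimal explicit 3-step, 2nd-order SSP multistep method (coefficient 1/2)
    alpha = [3 / 4, 0.0, 1 / 4]
    beta = [3 / 2, 0.0, 0.0]
    print("SSP coefficient:", ssp_coefficient(alpha, beta))    # 0.5
    ```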

  3. Platinum nanofilm formation by EC-ALE via redox replacement of UPD copper: studies using in-situ scanning tunneling microscopy.

    Science.gov (United States)

    Kim, Youn-Geun; Kim, Jay Y; Vairavapandian, Deepa; Stickney, John L

    2006-09-14

    The growth of Pt nanofilms on well-defined Au(111) electrode surfaces, using electrochemical atomic layer epitaxy (EC-ALE), is described here. EC-ALE is a deposition method based on surface-limited reactions. This report describes the first use of surface-limited redox replacement reactions (SLR(3)) in an EC-ALE cycle to form atomically ordered metal nanofilms. The SLR(3) consisted of the underpotential deposition (UPD) of a copper atomic layer, subsequently replaced by Pt at open circuit, in a Pt cation solution. This SLR(3) was then used as a cycle, repeated to grow thicker Pt films. Deposits were studied using a combination of electrochemistry (EC), in-situ scanning tunneling microscopy (STM) using an electrochemical flow cell, and ultrahigh vacuum (UHV) surface studies combined with electrochemistry (UHV-EC). A single redox replacement of UPD Cu from a PtCl(4)(2-) solution yielded an incomplete monolayer, though no preferential deposition was observed at step edges. Use of an iodine adlayer, as a surfactant, facilitated the growth of uniform films. In-situ STM images revealed an ordered Au(111)-(√3 × √3)R30°-iodine structure, with areas partially distorted by Pt nanoislands. After the second application, an ordered Moiré pattern was observed with a spacing consistent with the lattice mismatch between a Pt monolayer and the Au(111) substrate. After application of three or more cycles, a new adlattice, a (3 x 3)-iodine structure, was observed, previously observed for I atoms adsorbed on Pt(111). In addition, five-atom adsorbed Pt-I complexes randomly decorated the surface and showed some mobility. These pinwheels, planar PtI(4) complexes, and the ordered (3 x 3)-iodine layer all appeared stable during rinsing with blank solution, free of I(-) and the Pt complex (PtCl(4)(2-)).

  4. Advanced Computational Methods for Thermal Radiative Heat Transfer

    Energy Technology Data Exchange (ETDEWEB)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.,

    2016-10-01

    Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to compute routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.

  5. Improved methods for computing masses from numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (qq̄ and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.

  6. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  7. The null-event method in computer simulation

    International Nuclear Information System (INIS)

    Lin, S.L.

    1978-01-01

    The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation and reduces the likelihood of logical error. (Auth.)

  8. Shielding analysis methods available in the SCALE computational system

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

    1986-01-01

    Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

  9. Lattice QCD computations: Recent progress with modern Krylov subspace methods

    Energy Technology Data Exchange (ETDEWEB)

    Frommer, A. [Bergische Universitaet GH Wuppertal (Germany)

    1996-12-31

    Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundred very large linear systems with several right hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.

  10. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    Science.gov (United States)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨U_UV⟩/2 (⟨U_UV⟩ is the ensemble average of the sum of pair interaction energy between solute and water molecule) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through a MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as the linear combinations of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
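
    A minimal sketch of the decomposition described above, with the four geometric measures and the ER-fitted coefficients replaced by placeholder numbers:

    ```python
    # HFE ~= <U_UV>/2 + morphometric term c . (V, A, C, X), where V, A, C, X
    # are the excluded volume, surface area, and integrated mean and Gaussian
    # curvature of the solute. All numbers below are placeholders, not
    # energy-representation fits.
    import numpy as np

    def hydration_free_energy(mean_uv_energy, measures, coeffs):
        return 0.5 * mean_uv_energy + float(np.dot(coeffs, measures))

    measures = np.array([1.2e4, 5.3e3, 2.1e2, 12.6])   # placeholder geometry
    coeffs = np.array([0.15, -0.04, 1.8, -9.0])        # placeholder weights
    print(hydration_free_energy(-350.0, measures, coeffs))
    ```

    The speed advantage comes from the fact that, once the coefficients are fitted, only the four geometric measures of a new solute are needed.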

  11. Computational Methods for Sparse Solution of Linear Inverse Problems

    Science.gov (United States)

    2009-03-01

    Sparse approximation draws on methods from harmonic analysis [5]; for example, natural images can be approximated with relatively few wavelet coefficients. The survey covers greedy ascent algorithms (see Section III-F) as well as convex relaxation methods such as LARS [29] and homotopy [30]. The homotopy method of Osborne, Presnell, and Turlach [30] follows the piecewise-linear solution path, stepping to the closest value of β where the derivative of the piecewise-linear function changes.

  12. Validating Computer Security Methods: Meta-methodology for an Adversarial Science

    OpenAIRE

    Roque, Antonio

    2017-01-01

    How can we justify the validity of our computer security methods? This meta-methodological question is related to recent explorations on the science of computer security, which have been hindered by computer security's unique properties. We confront this by developing a taxonomy of properties and methods. Interdisciplinary foundations provide a solid grounding for a set of essential concepts, including a decision tree for characterizing adversarial interaction. Several types of invalidation a...

  13. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  14. A finite element method for the computation of transonic flow past airfoils

    Science.gov (United States)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of the transonic flow with shocks past airfoils is presented using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics requiring special attention to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

  15. A linear perturbation computation method applied to hydrodynamic instability growth predictions in ICF targets

    International Nuclear Information System (INIS)

    Clarisse, J.M.; Boudesocque-Dubois, C.; Leidinger, J.P.; Willien, J.L.

    2006-01-01

    A linear perturbation computation method is used to compute hydrodynamic instability growth in model implosions of inertial confinement fusion direct-drive and indirect-drive designed targets. Accurate descriptions of linear perturbation evolutions for Legendre mode numbers up to several hundreds have thus been obtained in a systematic way, motivating further improvements of the physical modeling currently handled by the method. (authors)

  16. Improved fixed point iterative method for blade element momentum computations

    DEFF Research Database (Denmark)

    Sun, Zhenye; Shen, Wen Zhong; Chen, Jin

    2017-01-01

    The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge...

  17. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
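
    The record notes that the boundary value problem is solved by quasi-linearization, one of the shooting methods. A generic shooting iteration, shown here on a classic scalar test problem rather than the FCT equilibrium equations, captures the idea:

    ```python
    # Shooting method for the two-point BVP y'' = 1.5*y^2, y(0) = 4, y(1) = 1:
    # guess the initial slope, integrate, and drive the boundary mismatch to
    # zero with a scalar root solver (one solution has y'(0) = -8 exactly).
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(t, y):                       # state y = [y, y']
        return [y[1], 1.5 * y[0] ** 2]

    def boundary_mismatch(slope):
        sol = solve_ivp(rhs, (0.0, 1.0), [4.0, slope], rtol=1e-10, atol=1e-10)
        return sol.y[0, -1] - 1.0        # want y(1) = 1

    slope = brentq(boundary_mismatch, -10.0, -5.0)
    print("initial slope y'(0) =", slope)   # approximately -8.0
    ```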

  18. Computation Method Comparison for Th Based Seed-Blanket Cores

    International Nuclear Information System (INIS)

    Kolesnikov, S.; Galperin, A.; Shwageraus, E.

    2004-01-01

    This work compares two methods for calculating a given nuclear fuel cycle in the WASB configuration; both methods use the ELCOS Code System (the 2-D transport code BOXER and the 3-D nodal code SILWER) [4]. In the first method, the cross-sections of the Seed and Blanket, needed for the 3-D nodal code, are generated separately for each region by the 2-D transport code. In the second method, the cross-sections of the Seed and Blanket, needed for the 3-D nodal code, are generated from Seed-Blanket Colorsets (Fig.1) calculated by the 2-D transport code. The evaluation of the error introduced by the first method is the main objective of the present study.

  19. Simple and fast method for step size determination in computations of signal propagation through nonlinear fibres

    DEFF Research Database (Denmark)

    Rasmussen, Christian Jørgen

    2001-01-01

    Presents a simple and fast method for determination of the step size that exactly leads to a prescribed accuracy when signal propagation through nonlinear optical fibres is computed using the split-step Fourier method.
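
    The sketch below shows a symmetric split-step Fourier step for a fiber-type nonlinear Schroedinger equation together with a step-doubling local error estimate, the quantity that a step-size rule of the kind described would hold at a prescribed accuracy; signs and parameters follow one common convention and are illustrative only.

    ```python
    # One symmetric split step (half dispersion, nonlinearity, half dispersion)
    # and a local error estimate from comparing one dz step with two dz/2 steps.
    import numpy as np

    n, t_span = 1024, 40.0
    t = np.linspace(-t_span / 2, t_span / 2, n, endpoint=False)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])
    beta2, gamma = -1.0, 1.0               # illustrative dispersion and Kerr terms

    def step(a, dz):
        half = np.exp(0.5j * (beta2 / 2) * omega ** 2 * dz)   # half dispersion
        a = np.fft.ifft(half * np.fft.fft(a))
        a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)      # nonlinear phase
        return np.fft.ifft(half * np.fft.fft(a))

    a0 = 1.0 / np.cosh(t)                  # sech test pulse
    dz = 0.1
    coarse = step(a0, dz)
    fine = step(step(a0, dz / 2), dz / 2)
    err = np.linalg.norm(fine - coarse) / np.linalg.norm(fine)
    print("relative local error estimate:", err)   # halve dz until below target
    ```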

  20. Magneto Hydrodynamic Simulations of a Magnetic Flux Compression Generator Using ALE3D

    Science.gov (United States)

    2017-07-13

    Magneto-Hydrodynamic Simulations of a Magnetic Flux Compression Generator Using ALE3D, by George B Vunni, Weapons and Materials Research Directorate, US Army Research Laboratory, ARL-TR-8055, July 2017. The surviving abstract fragments note that the computed magnetic energy is only correct for materials with a constant value of permeability, and that it is difficult to measure accurately.

  1. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  2. Medical Data Probabilistic Analysis by Optical Computing Methods

    Directory of Open Access Journals (Sweden)

    Alexander LARKIN

    2014-06-01

    Full Text Available The purpose of this article is to show that coherent laser photonics methods can be used for the classification of medical information. It is shown that holographic methods can be used not only for work with images: they can also process information provided in a universal multi-parametric form. It is shown that, along with the usual correlation algorithm, a number of classification algorithms can be realized: searching for a precedent, Hamming distance measurement, the Bayes probability algorithm, and deterministic and “correspondence” algorithms. Significantly, this preserves all the advantages of the holographic method: speed, two-dimensionality, record-breaking memory capacity, flexibility of data processing and representation of results, and high radiation resistance in comparison with electronic equipment. As an example, results are presented for one problem of medical diagnostics: forecasting the state of an organism after mass traumatic lesions.

  3. Control of an electronic arm using electromyographic signals

    Directory of Open Access Journals (Sweden)

    Jorge Andrés García-Pinzón

    2015-05-01

    Full Text Available Work focused on the extraction of patterns from electromyographic signals (SEMG) has been growing due to its many applications. This article presents an application in which an electronic system is implemented to record the SEMG of the upper limb of a subject in order to remotely control an electronic arm. A preprocessing stage was applied to the recorded signals to remove information of little relevance and to recognize zones of interest; the patterns were then extracted and classified. The techniques used were wavelet analysis (WA), principal component analysis (PCA), the Fourier transform (FT), the discrete cosine transform (DCT), energy, support vector machines (SVM) and artificial neural networks (ANN). This article demonstrates that the proposed methodology achieves a classification performance above 95%. More than 4000 signals were recorded.

  4. Microbial diversity and metabolite composition of Belgian red-brown acidic ales.

    Science.gov (United States)

    Snauwaert, Isabel; Roels, Sanne P; Van Nieuwerburg, Filip; Van Landschoot, Anita; De Vuyst, Luc; Vandamme, Peter

    2016-03-16

    Belgian red-brown acidic ales are sour and alcoholic fermented beers, which are produced by mixed-culture fermentation and blending. The brews are aged in oak barrels for about two years, after which mature beer is blended with young, non-aged beer to obtain the end-products. The present study evaluated the microbial community diversity of Belgian red-brown acidic ales at the end of the maturation phase of three consecutive brews of three different breweries. The microbial diversity was compared with the metabolite composition of the brews at the end of the maturation phase. Therefore, mature brew samples were subjected to 454 pyrosequencing of the 16S rRNA gene (bacteria) and the internal transcribed spacer region (yeasts) and a broad range of metabolites was quantified. The most important microbial species present in the Belgian red-brown acidic ales investigated were Pediococcus damnosus, Dekkera bruxellensis, and Acetobacter pasteurianus. In addition, this culture-independent analysis revealed operational taxonomic units that were assigned to an unclassified fungal community member, Candida, and Lactobacillus. The main metabolites present in the brew samples were L-lactic acid, D-lactic acid, and ethanol, whereas acetic acid was produced in lower quantities. The most prevailing aroma compounds were ethyl acetate, isoamyl acetate, ethyl hexanoate, and ethyl octanoate, which may influence the aroma of the end-products. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Magmatic architecture within a rift segment: Articulate axial magma storage at Erta Ale volcano, Ethiopia

    Science.gov (United States)

    Xu, Wenbin; Rivalta, Eleonora; Li, Xing

    2017-10-01

    Understanding the magmatic systems beneath rift volcanoes provides insights into the deeper processes associated with rift architecture and development. At the slow-spreading Erta Ale segment (Afar, Ethiopia), the transition from continental rifting to seafloor spreading is ongoing on land. A lava lake has been documented since the twentieth century at the summit of the Erta Ale volcano and acts as an indicator of the pressure of its magma reservoir. However, the structure of the plumbing system of the volcano feeding such a persistent active lava lake and the mechanisms controlling the architecture of magma storage remain unclear. Here, we combine high-resolution satellite optical imagery and radar interferometry (InSAR) to infer the shape, location and orientation of the conduits feeding the 2017 Erta Ale eruption. We show that the lava lake was rooted in a vertical dike-shaped reservoir that had been inflating prior to the eruption. The magma was subsequently transferred into a shallower feeder dike. We also find a shallow, horizontal magma lens, elongated along the rift axis, inflating beneath the volcano during the later period of the eruption. Edifice stress modeling suggests that this hydraulically connected system of thin horizontal and vertical magmatic bodies, able to open and close, is arranged spatially according to stresses induced by loading and unloading due to topographic changes. Our combined approach may provide new constraints on the organization of magma plumbing systems beneath volcanoes in continental and marine settings.

  6. Three dimensional reconstruction of computed tomographic images by computer graphics method

    International Nuclear Information System (INIS)

    Kashiwagi, Toru; Kimura, Kazufumi.

    1986-01-01

    A three dimensional computer reconstruction system for CT images has been developed in a commonly used radionuclide data processing system using a computer graphics technique. The three dimensional model was constructed from organ surface information of CT images (slice thickness: 5 or 10 mm). Surface contours of the organs were extracted manually from a set of parallel transverse CT slices in serial order and stored in the computer memory. Interpolation was made between a set of the extracted contours by cubic spline functions, then three dimensional models were reconstructed. The three dimensional images were displayed as a wire-frame and/or solid models on the color CRT. Solid model images were obtained as follows. The organ surface constructed from contours was divided into many triangular patches. The intensity of light to each patch was calculated from the direction of incident light, eye position and the normal to the triangular patch. Firstly, this system was applied to the liver phantom. Reconstructed images of the liver phantom were coincident with the actual object. This system also has been applied to human various organs such as brain, lung, liver, etc. The anatomical organ surface was realistically viewed from any direction. The images made us more easily understand the location and configuration of organs in vivo than original CT images. Furthermore, spacial relationship among organs and/or lesions was clearly obtained by superimposition of wire-frame and/or different colored solid models. Therefore, it is expected that this system is clinically useful for evaluating the patho-morphological changes in broad perspective. (author)
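
    The shading step described above, in which patch brightness follows from the incident-light direction and the patch normal, is a Lambertian model; the sketch below implements that step (omitting the eye-position term) on a toy mesh.

    ```python
    # Lambertian intensity of triangular surface patches: brightness is the
    # clipped dot product of the unit patch normal with the light direction.
    import numpy as np

    def patch_intensities(vertices, triangles, light_dir):
        light = np.asarray(light_dir, float)
        light /= np.linalg.norm(light)
        p0, p1, p2 = (vertices[triangles[:, i]] for i in range(3))
        normals = np.cross(p1 - p0, p2 - p0)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        return np.clip(normals @ light, 0.0, 1.0)   # 0 = facing away from light

    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    tris = np.array([[0, 1, 2], [0, 1, 3]])
    print(patch_intensities(verts, tris, light_dir=[0, 0, 1]))   # [1. 0.]
    ```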

  7. The moduli space of instantons on an ALE space from 3d $\mathcal{N}=4$ field theories

    CERN Document Server

    Mekareeya, Noppadol

    2015-01-01

    The moduli space of instantons on an ALE space is studied using the moduli space of $\mathcal{N}=4$ field theories in three dimensions. For instantons in a simple gauge group $G$ on $\mathbb{C}^2/\mathbb{Z}_n$, the Hilbert series of such an instanton moduli space is computed from the Coulomb branch of the quiver given by the affine Dynkin diagram of $G$ with flavour nodes of unitary groups attached to various nodes of the Dynkin diagram. We provide a simple prescription to determine the ranks and the positions of these flavour nodes from the order of the orbifold $n$ and from the residual subgroup of $G$ that is left unbroken by the monodromy of the gauge field at infinity. For $G$ a simply laced group of type $A$, $D$ or $E$, the Higgs branch of such a quiver describes the moduli space of instantons in projective unitary group $PU(n) \cong U(n)/U(1)$ on orbifold $\mathbb{C}^2/\hat{G}$, where $\hat{G}$ is the discrete group that is in McKay correspondence to $G$. Moreover, we present the quiver whose Coulomb ...

  8. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results

  9. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...... by a factor of 10 in comparison to the existing technique. The hybrid technique includes the coupling between the MoM and PO regions and numerical results are presented to illustrate the accuracy. The hierarchical feature of the new higher-order Legendre basis functions allows a flexible selection...

  10. Modern Electrophysiological Methods for Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Rolando Grave de Peralta Menendez

    2007-01-01

    Full Text Available Modern electrophysiological studies in animals show that the spectrum of neural oscillations encoding relevant information is broader than previously thought and that many diverse areas are engaged for very simple tasks. However, EEG-based brain-computer interfaces (BCI) still employ as control modality relatively slow brain rhythms or features derived from preselected frequencies and scalp locations. Here, we describe the strategy and the algorithms we have developed for the analysis of electrophysiological data and demonstrate their capacity to lead to faster accurate decisions based on linear classifiers. To illustrate this strategy, we analyzed two typical BCI tasks. (1) Mu-rhythm control of a cursor movement by a paraplegic patient. For this data, we show that although the patient received extensive training in mu-rhythm control, valuable information about movement imagination is present on the untrained high-frequency rhythms. This is the first demonstration of the importance of high-frequency rhythms in imagined limb movements. (2) Self-paced finger tapping task in three healthy subjects including the data set used in the BCI-2003 competition. We show that by selecting electrodes and frequency ranges based on their discriminative power, the classification rates can be systematically improved with respect to results published thus far.

  11. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol of pluripotency assessment remains to be established. High-throughput techniques help here, in particular gene expression microarrays, which have become a complementary technique for cellular characterization. Research has shown that transcriptomics comparison with a reference Embryonic Stem Cell (ESC) is a good approach to assessing pluripotency. Under the premise that the best protocol is a computer software source code, here I propose and explain line by line a software protocol coded in R-Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with a reference ESC. I provide advice on experimental design, warnings about possible pitfalls, and guides for results interpretation.
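
    The core of the comparison can be stated in a few lines. The record's protocol is written in R-Bioconductor; the following Python restatement with synthetic expression profiles is only an illustration of the idea, not the published protocol.

    ```python
    # Pluripotency score as correlation of a log expression profile with an
    # ESC reference profile. All profiles here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    esc_reference = rng.lognormal(2.0, 1.0, 5000)               # placeholder ESC
    candidate = esc_reference * rng.lognormal(0.0, 0.2, 5000)   # ESC-like sample
    fibroblast = rng.lognormal(2.0, 1.0, 5000)                  # unrelated sample

    def pluripotency_score(sample, reference):
        return np.corrcoef(np.log2(sample + 1), np.log2(reference + 1))[0, 1]

    print("candidate :", pluripotency_score(candidate, esc_reference))   # near 1
    print("fibroblast:", pluripotency_score(fibroblast, esc_reference))  # near 0
    ```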

  12. Image quality improvement in computational reconstruction of partially occluded objects using two computational integral imaging reconstruction methods

    Science.gov (United States)

    Lee, Joon-Jae; Shin, Donghak; Yoo, Hoon

    2013-09-01

    In this paper, we propose an image quality improvement method for partially occluded objects using two different computational integral imaging reconstruction (CIIR) methods. In the proposed method, we first remove the occlusion in the recorded elemental images using two different plane images which are generated from the two different CIIR methods. We introduce a CIIR method based on a round-mapping model for combined use with the previous method. The difference between the two plane images reconstructed at a specific distance enables us to estimate the position of the occlusion in the elemental images. The occlusion-removed elemental images are used to reconstruct the improved 3D images. We carry out some experiments and present the results to show the usefulness of the proposed method.
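
    The occlusion-estimation idea, comparing two plane images reconstructed at the same depth by different CIIR variants and flagging pixels where they disagree strongly, reduces to a thresholded difference; the images below are synthetic placeholders.

    ```python
    # Flag occlusion pixels as large disagreements between two reconstructions
    # of the same depth plane (synthetic stand-ins for the two CIIR outputs).
    import numpy as np

    rng = np.random.default_rng(2)
    plane_a = rng.random((64, 64))          # CIIR variant 1 at depth z
    plane_b = plane_a.copy()                # CIIR variant 2 at depth z
    plane_b[20:30, 20:30] += 0.8            # region where the variants disagree

    occlusion_mask = np.abs(plane_a - plane_b) > 0.4
    print("flagged occlusion pixels:", int(occlusion_mask.sum()))   # 100
    # the flagged elemental-image pixels would be removed before the
    # final 3D reconstruction
    ```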

  13. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by heavy computational load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrary sampling objects with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
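
    The zone plates whose symmetry and similarity the method exploits are Fresnel phase patterns of individual object points; a minimal sketch, with illustrative wavelength and pixel pitch rather than the paper's parameters:

    ```python
    # Fresnel zone-plate phase of a point at (x0, y0, z):
    # phi = pi * ((x - x0)^2 + (y - y0)^2) / (lambda * z).
    import numpy as np

    wavelength = 532e-9                     # illustrative green laser
    pitch, n = 8e-6, 512                    # illustrative pixel pitch and size
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)

    def zone_plate_phase(x0, y0, z):
        return np.pi * ((x - x0) ** 2 + (y - y0) ** 2) / (wavelength * z)

    # two object points at different depths; a deeper point yields a coarser
    # plate, which depth-compensating schemes reuse via scaling instead of
    # recomputing the full pattern
    field = np.exp(1j * zone_plate_phase(0.0, 0.0, 0.10))
    field += np.exp(1j * zone_plate_phase(1e-4, 0.0, 0.12))
    hologram_phase = np.angle(field)
    print(hologram_phase.shape)             # (512, 512)
    ```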

  14. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of t...

  15. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  16. Method and Apparatus for Computed Imaging Backscatter Radiography

    Science.gov (United States)

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  17. A hyperpower iterative method for computing the generalized Drazin ...

    Indian Academy of Sciences (India)

    A quadratically convergent Newton-type iterative scheme is proposed for approximating the generalized Drazin inverse b^d of a Banach algebra element b. Further, its extension into the form of the hyperpower iterative method of arbitrary order p ≥ 2 is presented. Convergence criteria along with the estimation of error ...
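
    Written for the ordinary matrix inverse, where a safe initial guess is easy to state (the generalized Drazin-inverse setting requires a more careful start), the hyperpower family looks as follows; p = 2 recovers the quadratically convergent Newton-Schulz scheme.

    ```python
    # Hyperpower iteration of order p:
    #   X_{k+1} = X_k (I + R + R^2 + ... + R^{p-1}),  R = I - A X_k.
    # Shown for an ordinary inverse with the standard safe start
    # X_0 = A^T / (||A||_1 * ||A||_inf), not for the Drazin inverse itself.
    import numpy as np

    def hyperpower(A, p=3, iters=30):
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(iters):
            R = I - A @ X
            S = I.copy()
            for _ in range(p - 1):         # Horner form of I + R + ... + R^{p-1}
                S = I + R @ S
            X = X @ S
        return X

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    print(np.allclose(hyperpower(A, p=3), np.linalg.inv(A)))   # True
    ```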

  18. Engineering computation of structures the finite element method

    CERN Document Server

    Neto, Maria Augusta; Roseiro, Luis; Cirne, José; Leal, Rogério

    2015-01-01

    This book presents theories and the main useful techniques of the Finite Element Method (FEM), with an introduction to FEM and many case studies of its use in engineering practice. It supports engineers and students to solve primarily linear problems in mechanical engineering, with a main focus on static and dynamic structural problems. Readers of this text are encouraged to discover the proper relationship between theory and practice, within the finite element method: Practice without theory is blind, but theory without practice is sterile. Beginning with elasticity basic concepts and the classical theories of stressed materials, the work goes on to apply the relationship between forces, displacements, stresses and strains on the process of modeling, simulating and designing engineered technical systems. Chapters discuss the finite element equations for static, eigenvalue analysis, as well as transient analyses. Students and practitioners using commercial FEM software will find this book very helpful. It us...

  19. Efficient computational methods for sequence analysis of small RNAs

    OpenAIRE

    Cozen, Gozde

    2007-01-01

    With the discovery of small regulatory RNAs, there has been a tremendous increase in the number of RNA sequencing projects. Meanwhile, novel high-throughput sequencing technologies, which can sequence as much as 500000 small RNA sequences in one run, have emerged. The challenge of processing this rapidly growing data can be addressed by optimizing current analysis approaches for small RNA sequences. We present fast register-level methods for small RNA pairwise alignment and small RNA to genom...

  20. Software Components and Formal Methods from a Computational Viewpoint

    OpenAIRE

    Lambertz, Christian

    2012-01-01

    Software components and the methodology of component-based development offer a promising approach to master the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic a...

  1. Computing multiple zeros using a class of quartically convergent methods

    Directory of Open Access Journals (Sweden)

    F. Soleymani

    2013-09-01

    For functions with finitely many real roots in an interval, relatively little literature exists, although in applications users often wish to find all the real zeros at once. Hence, the second aim of this paper is to design a fourth-order algorithm, based on the developed methods, to find all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
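
    A sketch of the find-all-zeros strategy: sample the interval, bracket each sign change, and polish every bracket with a root solver (scipy's brentq here, standing in for the paper's fourth-order scheme).

    ```python
    # Locate all simple real roots of f on [a, b] by bracketing sign changes
    # on a fine grid and refining each bracket.
    import numpy as np
    from scipy.optimize import brentq

    def all_real_roots(f, a, b, samples=2000):
        xs = np.linspace(a, b, samples)
        fs = f(xs)
        brackets = np.flatnonzero(np.sign(fs[:-1]) * np.sign(fs[1:]) < 0)
        return [brentq(f, xs[i], xs[i + 1]) for i in brackets]

    f = lambda x: np.cos(x) - 0.3 * x
    print(all_real_roots(f, -10.0, 10.0))
    ```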

  2. New computational methods for determining antikaon-nucleus bound states

    International Nuclear Information System (INIS)

    Fink, P.J. Jr.

    1989-01-01

    Optical potentials for antikaon-nucleus strong interactions are constructed using elementary antikaon-nucleon potentials determined previously. The optical potentials are used to determine the existence of a kaon hypernucleus. Modern three-dimensional visualization techniques are used to study model dependences, new methods for speeding the calculation of the optical potential are developed, and previous approximations made to avoid full Fermi averaging are eliminated. 19 refs., 21 figs., 3 tabs

  3. Theoretical studies of potential energy surfaces and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R. [Argonne National Laboratory, IL (United States)

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  4. Research on Quantum Authentication Methods for the Secure Access Control Among Three Elements of Cloud Computing

    Science.gov (United States)

    Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo

    2016-12-01

    Cloud computing and big data have become the developing engine of current information technology (IT) as a result of the rapid development of IT. However, security protection has become increasingly important for cloud computing and big data, and has become a problem that must be solved to develop cloud computing. The theft of identity authentication information remains a serious threat to the security of cloud computing. In this process, attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and deduces the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. This technology cannot be cloned; thus, it is more secure and reliable than classical methods.

  5. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  6. A rigid motion correction method for helical computed tomography (CT)

    Science.gov (United States)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  7. Insects involved in post-harvest cereal losses in ...

    African Journals Online (AJOL)

    Cereals are the basis of the Cameroonian diet and are the most heavily imported food products. These imports are indispensable to offset cereal food deficits and periodic famines. This cereal deficit is explained, among other factors, by post-harvest losses due to insects ...

  8. DCA opacity results computed by Monte Carlo Methods

    International Nuclear Information System (INIS)

    Wilson, B.G.; Albritton, J.R.; Liberman, D.A.

    1991-01-01

    The authors present the Monte Carlo methods employed by the code ENRICO for obtaining detailed configuration accounting calculations of LTE opacity. Sample calculations of some mid-Z elements, all at experimentally accessible conditions (60 eV temperature and one-hundredth of solid density), are presented to illustrate the phenomenon of transition array breakup. The prediction of systematic trends in transition array breakup is proposed as a means of testing the ion stage balance produced by codes. The importance of including detailed level transitions in arrays, at least at the level of the UTA approximation, is presented, and a novel approximation for explicitly incorporating the individual transitions between configurations is discussed.

  9. Translation Method and Computer Programme for Assisting the Same

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention relates to a translation method comprising the steps of: a translator speaking a translation of a written source text in a target language, an automatic speech recognition system converting the spoken translation into a set of phone and word hypotheses in the target language......, a machine translation system translating the written source text into a set of translations hypotheses in the target language, and an integration module combining the set of spoken word hypotheses and the set of machine translation hypotheses obtaining a text in the target language. Thereby obtaining...

  10. Computer-aided method of airborne uranium in working areas

    International Nuclear Information System (INIS)

    Dagen, E.; Ringel, V.; Rossbach, H.

    1981-09-01

    The described procedure allows the routine determination of uranium aerosols with low personnel and technical effort. The activity deposited on the filters is measured automatically twice a night. The computerized evaluation, including the elimination of radon and thoron daughter products, is made off-line with the aid of the code ULK1. The results are available at the beginning of the following working day and can be used for radiation protection planning. The sensitivity of the method of eliminating the airborne natural activity is 4 times less than that of measurements after its complete decay. This, however, is not of significance for radiation protection purposes

  11. Comparison of evaporation computation methods, Pretty Lake, Lagrange County, northeastern Indiana

    Science.gov (United States)

    Ficke, John F.

    1972-01-01

    Evaporation from Pretty Lake has been computed for a 2.5-year period between 1963 and 1965 by the use of an energy budget, mass-transfer parameters, a water budget, a class-A pan, and a computed pan evaporation technique. The seasonal totals for the different methods are within 8 percent of their mean and are within 11 percent of the rate of 79 centimeters (31 inches) per year determined from published maps that are based on evaporation-pan data. Period-by-period differences among the methods are larger than the annual differences, but there is a general agreement among the evaporation hydrographs produced by the different computation methods.

  12. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    Full Text Available This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet specific needs of distance students, such as: students' inability to sit for the scheduled test, conflicting test schedules, and students' wish for the flexibility to retake examinations to improve their grades. In 2004, UT initiated a pilot project to develop a system and program for the computer-based testing method. Then in 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be provided by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by Regional Office staff. The development of the computer-based testing began with tests using computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of the test involves the generation and selection of test items from the item bank collection of the UT Examination Center; the combination of the selected items comprises the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access by students.

  13. SALE-3D, 3-D Fluid Flow, Navier Stokes Equation Using Lagrangian or Eulerian Method

    International Nuclear Information System (INIS)

    Amsden, A.A.; Ruppel, H.M.

    1991-01-01

    1 - Description of problem or function: SALE-3D calculates three-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a three-dimensional network of arbitrarily shaped, six-sided deformable cells, and a variety of user-selectable boundary conditions are provided in the program. 2 - Method of solution: SALE-3D uses an ICED-ALE technique, which combines the ICE method of treating flow speeds and the ALE mesh treatment to calculate three-dimensional fluid flow. The finite-difference approximations to the conservation of mass, momentum, and specific internal energy differential equations are solved in a sequence of time steps on a network of deformable computational cells. The basic hydrodynamic part of each cycle is divided into three phases: (1) an explicit solution of the Lagrangian equations of motion updating the velocity field by the effects of all forces, (2) an implicit calculation using a Newton-Raphson iterative scheme that provides time-advanced pressures and velocities, and (3) the addition of advective contributions for runs that are Eulerian or contain some relative motion of grid and fluid. A powerful feature of this three-phase approach is the ease with which…
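
    The three-phase cycle just described can be illustrated with a minimal one-dimensional sketch (Python). This is a structural illustration only: it assumes an ideal-gas equation of state, an explicit stand-in for the implicit pressure phase, and a density-only remap, none of which are taken from SALE-3D itself.

```python
import numpy as np

def ale_cycle_1d(x, rho, u, e, dt, gamma=1.4):
    """One highly simplified ICED-ALE-style cycle in 1D (illustrative only).
    x: node positions (n+1,), u: node velocities (n+1,),
    rho, e: cell density and specific internal energy (n,)."""
    dx = np.diff(x)
    p = (gamma - 1.0) * rho * e                       # ideal-gas EOS
    m_cell = rho * dx
    m_node = 0.5 * (np.r_[m_cell, 0.0] + np.r_[0.0, m_cell])

    # Phase 1 -- explicit Lagrangian phase: accelerate nodes by pressure forces.
    f_node = np.r_[0.0, p[:-1] - p[1:], 0.0]          # interior pressure forces
    u = u + dt * f_node / m_node

    # Phase 2 -- pressure work (a real ICED-ALE code iterates this implicitly
    # with Newton-Raphson for time-advanced pressures; done explicitly here).
    x_lag = x + dt * u                                # Lagrangian mesh motion
    dx_lag = np.diff(x_lag)
    rho_lag = m_cell / dx_lag                         # mass conservation
    e_lag = e - p * (dx_lag - dx) / m_cell            # p dV work per unit mass

    # Phase 3 -- remap: advect mass back onto the original (Eulerian) grid via
    # the cumulative-mass function; velocity/energy remap omitted for brevity.
    cum_m = np.r_[0.0, np.cumsum(m_cell)]
    rho_new = np.diff(np.interp(x, x_lag, cum_m)) / dx
    return rho_new, u, e_lag
```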

  14. A parallel finite-difference method for computational aerodynamics

    International Nuclear Information System (INIS)

    Swisshelm, J.M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed. 14 refs

  15. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates......, where the experimental effort could be focused. In this project a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its new model import and export capabilities, have been developed. The new feature for model transfer...... has been developed by establishing a connection with an external modelling environment for code generation. The main contribution of this thesis is a creation of modelling templates and their connection with other modelling tools within a modelling framework. The goal was to create a user...

  16. A general method for computing the total solar radiation force on complex spacecraft structures

    Science.gov (United States)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
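
    For a single flat element of such a structure, the force contribution has a standard closed form; below is a minimal sketch (Python) using the common flat-plate model with specular and diffuse reflectivities. The symbols and model are generic textbook choices, not taken from Chan's paper.

```python
import numpy as np

def plate_srp_force(flux, area, n_hat, s_hat, rho_s, rho_d, c=299792458.0):
    """Radiation force [N] on a flat plate (standard flat-plate model).
    flux: solar flux [W/m^2]; area: plate area [m^2]; n_hat: unit outward
    normal; s_hat: unit vector from plate toward the Sun; rho_s, rho_d:
    specular and diffuse reflectivities."""
    cos_t = float(np.dot(n_hat, s_hat))
    if cos_t <= 0.0:                     # plate faces away from the Sun
        return np.zeros(3)
    P = flux / c                         # radiation pressure [N/m^2]
    # absorbed + specularly reflected + diffusely (Lambertian) reflected parts
    return -P * area * cos_t * ((1.0 - rho_s) * np.asarray(s_hat, float)
                                + 2.0 * (rho_s * cos_t + rho_d / 3.0)
                                * np.asarray(n_hat, float))
```

    Summing such plate terms over a faceted spacecraft model (with shadowing handled separately) gives the total force that the surface-integral formulation evaluates in general form.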

  17. A simplified approach to compute distribution matrices for the mapping method

    NARCIS (Netherlands)

    Singh, M.K.; Galaktionov, O.S.; Meijer, H.E.H.; Anderson, P.D.

    2009-01-01

    The mapping method has proven its efficiency as an analysis and optimization tool for mixing in many different flow devices. In this paper, we present a new approach to compute the coefficients of the distribution matrix, which is, in terms of both computational speed and complexity, easier to…

  18. Direct methods for Poisson problems in low-level computer vision

    Science.gov (United States)

    Chhabra, Atul K.; Grogan, Timothy A.

    1990-09-01

    Several problems in low-level computer vision can be mathematically formulated as linear elliptic partial differential equations of the second order. A subset of these problems can be expressed in the form of a Poisson equation, Lu(x, y) = f(x, y). In this paper, fast direct methods for solving the Poisson equations of computer vision are developed. Until recently, iterative methods were used to solve these equations. Recently, direct Fourier techniques were suggested to speed up the computation. We present the Fourier Analysis and Cyclic Reduction (FACR) method, which is faster than the Fourier method or the Cyclic Reduction method alone. For computation on an n × n grid, the operation count for the Fourier method is O(n² log₂ n), and that for the FACR method is O(n² log₂ log₂ n). The FACR method first reduces the system of equations into a smaller set using Cyclic Reduction. Next, the reduced system is solved by the Fourier method. The final solution is obtained by back-substituting the solution of the reduced system. With Neumann boundary conditions, a Poisson equation does not have a unique solution. We show how a physically meaningful solution can be obtained under such circumstances. Application of the FACR and other methods is discussed for two problems of low-level computer vision - lightness, or reflectance from brightness, and recovering height from surface gradient.
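
    The Fourier stage at the heart of FACR is compact enough to sketch; below is a minimal FFT-based solve of the 5-point discrete Poisson equation on a periodic n × n grid (Python/NumPy), with the zero mode pinned to zero, one way of picking a solution when the problem is only defined up to a constant. This shows the pure Fourier stage, without cyclic reduction, and uses periodic rather than Neumann boundaries for brevity.

```python
import numpy as np

def poisson_fft(f, h=1.0):
    """Solve the 5-point discrete Poisson equation Lu = f on a periodic
    n x n grid by FFT diagonalization; f must have (near-)zero mean."""
    n = f.shape[0]
    k = np.arange(n)
    # eigenvalues of the 5-point Laplacian for each Fourier mode pair
    lam = (2 * np.cos(2 * np.pi * k[:, None] / n)
           + 2 * np.cos(2 * np.pi * k[None, :] / n) - 4) / h**2
    f_hat = np.fft.fft2(f)
    lam[0, 0] = 1.0          # avoid division by zero for the constant mode
    u_hat = f_hat / lam
    u_hat[0, 0] = 0.0        # pin the mean: the solution is unique only
    return np.real(np.fft.ifft2(u_hat))   # up to an additive constant
```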

  19. Computational methods for microfluidic microscopy and phase-space imaging

    Science.gov (United States)

    Pegard, Nicolas Christian Richard

    Modern optical devices are made by assembling separate components such as lenses, objectives, and cameras. Traditionally, each part is optimized separately, even though the trade-offs typically limit the performance of the system overall. This component-based approach is particularly unfit to solve the new challenges brought by modern biology: 3D imaging, in vivo environments, and high sample throughput. In the first part of this thesis, we introduce a general method to design integrated optical systems. The laws of wave propagation, the performance of available technology, as well as other design parameters are combined as constraints into a single optimization problem. The solution provides qualitative design rules to improve optical systems as well as quantitative task-specific methods to minimize loss of information. Our results have applications in optical data storage, holography, and microscopy. The second part of this dissertation presents a direct application. We propose a more efficient design for wide-field microscopy with coherent light, based on double transmission through the sample. Historically, speckle noise and aberrations caused by undesired interferences have made coherent illumination unpopular for imaging. We were able to dramatically reduce speckle noise and unwanted interferences using optimized holographic wavefront reconstruction. The resulting microscope not only yields clear coherent images with low aberration---even in thick samples---but also increases contrast and enables optical filtering and in-depth sectioning. In the third part, we develop new imaging techniques that better respond to the needs of modern biology research through implementing optical design optimization. Using a 4D phase-space distribution, we first represent the state and propagation of incoherent light. We then introduce an additional degree of freedom by putting samples in motion in a microfluidic channel, increasing image diversity. From there, we develop a

  20. Long Term Solar Radiation Forecast Using Computational Intelligence Methods

    Directory of Open Access Journals (Sweden)

    João Paulo Coelho

    2014-01-01

    Full Text Available The point prediction quality is closely related to the model that explains the dynamics of the observed process. Sometimes the model can be obtained from simple algebraic equations but, in the majority of physical systems, the relevant reality is too hard to model with simple ordinary differential or difference equations. This is the case for systems with nonlinear or nonstationary behaviour, which require more complex models. The discrete time-series problem, obtained by sampling the solar radiation, can be framed in this type of situation. By observing the collected data it is possible to distinguish multiple regimes. Additionally, due to atmospheric disturbances such as clouds, the temporal structure between samples is complex and is best described by nonlinear models. This paper reports solar radiation prediction using a hybrid model that combines the support vector regression paradigm and Markov chains. The hybrid model's performance is compared with that obtained by other methods such as autoregressive (AR) filters, Markov AR models, and artificial neural networks. The results suggest an improved prediction performance of the hybrid model with regard to both the prediction error and the dynamic behaviour.
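
    As a rough illustration of the support-vector-regression component of such a hybrid (the Markov-chain regime switching is omitted), an autoregressive SVR forecaster might look as follows. This uses Python with scikit-learn; the lag count and the synthetic series are arbitrary choices, not from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic daily "solar radiation" series: seasonal cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
y = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)

# Autoregressive embedding: predict y[t] from the previous `lags` values.
lags = 7
X = np.column_stack([y[i:i - lags] for i in range(lags)])
target = y[lags:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:-365], target[:-365])
pred = model.predict(X[-365:])              # one-step-ahead test predictions
print("RMSE:", np.sqrt(np.mean((pred - target[-365:]) ** 2)))
```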

  1. Umysł: system sprzeczny, ale nie trywialny [The mind: an inconsistent but non-trivial system]

    Directory of Open Access Journals (Sweden)

    Mateusz Hohol

    2010-12-01

    Full Text Available In this article, the model of an inconsistent mind following the suggestions of Hilary Putnam and Alan Turing is presented from the perspective of the cognitive sciences and evolutionary psychology. An attempt to reconcile the two versions of the modular model of mind, by Jerry Fodor and Steven Pinker, is undertaken, followed by a discussion of the problem of the evolutionary origin of mind. Next, the problem of the central module (interface) is considered, which is supposed to integrate the individual, specialized modules of mind. The main thesis of this article is that the 'global' inconsistency of mind may result from inconsistencies among 'local' computational modules of mind. Mind may be modeled as an inconsistent formal system which nevertheless remains non-trivial. Consequently, it seems rational to postulate that the operation of mind is not based on classical Aristotelian logic and is better described by systems of paraconsistent logic. The best examples of such logical systems include the discussive logic of Stanisław Jaśkowski, the logic of formal inconsistency (LFI) of Newton da Costa, and the many-valued logics of Jan Łukasiewicz and Graham Priest.

  2. Analytical and numerical methods for computing electron partial intensities in the case of multilayer systems

    International Nuclear Information System (INIS)

    Afanas’ev, Victor P.; Efremenko, Dmitry S.; Kaplya, Pavel S.

    2016-01-01

    Highlights: • The OKG-model is extended to finite thickness layers. • An efficient matrix technique for computing partial intensities is proposed. • Good agreement is obtained for computed partial intensities and experimental data. - Abstract: We present two novel methods for computing energy spectra and angular distributions of electrons emitted from multi-layer solids. They are based on the Ambartsumian–Chandrasekhar (AC) equations obtained by using the invariant imbedding method. The first method is analytical and relies on a linearization of the AC equations and the use of the small-angle approximation. The corresponding solution is in good agreement with that computed by using the Oswald–Kasper–Gaukler (OKG) model, which is extended to the case of layers of finite thickness. The second method is based on the discrete ordinate formalism and relies on a transformation of the AC equations to the algebraic Riccati and Lyapunov equations, which are solved by using the backward differentiation formula. Unlike the previous approach, this method can handle both linear and nonlinear equations. We analyze the applicability of the proposed methods to practical problems of computing REELS spectra. To demonstrate the efficiency of the proposed methods, several computational examples are considered. The numerical and analytical solutions obtained show good agreement with the experimental data and Monte-Carlo simulations. In addition, the impact of nonlinear terms in the Ambartsumian–Chandrasekhar equations is analyzed.

  3. QT Dispersion in Healthy Adult Nigerians | Ale | Nigerian Quarterly ...

    African Journals Online (AJOL)

    Methods: One hundred healthy Nigerian adults were studied. Healthy status of the subjects was determined by history and physical examination. A resting 12-lead ECG was obtained from all subjects for determination of QTc, QTd and ECG left ventricular hypertrophy (LVH) using Sokolow Lyon (SL) and Araoye's codes.

  4. The parameterization method for invariant manifolds from rigorous results to effective computations

    CERN Document Server

    Haro, Àlex; Figueras, Jordi-Lluis; Luque, Alejandro; Mondelo, Josep Maria

    2016-01-01

    This monograph presents some theoretical and computational aspects of the parameterization method for invariant manifolds, focusing on the following contexts: invariant manifolds associated with fixed points, invariant tori in quasi-periodically forced systems, invariant tori in Hamiltonian systems and normally hyperbolic invariant manifolds. This book provides algorithms of computation and some practical details of their implementation. The methodology is illustrated with 12 detailed examples, many of them well known in the literature of numerical computation in dynamical systems. A public version of the software used for some of the examples is available online. The book is aimed at mathematicians, scientists and engineers interested in the theory and applications of computational dynamical systems.

  5. The Relationship between Language Functions and Character Types in "Noon- Valghalam" by Jalal-Ale-Ahmad

    Directory of Open Access Journals (Sweden)

    Dr. S. A. Parsa

    Full Text Available Harmonizing the language functions of story characters with their character types is one of the hallmarks of successful modern story writing. In traditional Iranian narrative literature (prose and verse), this point was not considered important: story characters generally speak in the narrator's or writer's manner, and since what they say is the narrator's statement, they are not representative of their class and character type. Neglecting this matter disturbs both the creation of verisimilitude and characterization, which are important principles of storytelling. This study examines the story "Noon Val Ghalam" by the contemporary writer Jalal Ale-Ahmad from this perspective. The methodology is qualitative, and data collection is based on content analysis and document analysis. Since Ale-Ahmad was an Iranian contemporary writer familiar with Western and Iranian authors, one would expect the language and manner of speech of his characters to reflect their social class. This study shows, through various pieces of evidence, that the writer ignores the necessary relationship between language functions and character types, and that, because he imposes his own knowledge, diction, and political and social views, the independence of the protagonists in his story is not well maintained. Reflecting a writer's political and social thought in his works is not in itself a shortcoming, but giving protagonists speech that is not in harmony with their characters reduces them to instruments of particular social and political positions. This not only distorts characterization but also disorders an important element of story dialogue, since in every language people from different social groups use almost the same vocabulary that…

  6. Application of Computer-Assisted Learning Methods in the Teaching of Chemical Spectroscopy.

    Science.gov (United States)

    Ayscough, P. B.; And Others

    1979-01-01

    Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)

  7. Assessment of medical communication skills by computer: assessment method and student experiences

    NARCIS (Netherlands)

    Hulsman, R. L.; Mollema, E. D.; Hoos, A. M.; de Haes, J. C. J. M.; Donnison-Speijer, J. D.

    2004-01-01

    BACKGROUND A computer-assisted assessment (CAA) program for communication skills designated ACT was developed using the objective structured video examination (OSVE) format. This method features assessment of cognitive scripts underlying communication behaviour, a broad range of communication

  8. A computer-supported method to reveal and assess Personal Professional Theories in vocational education

    NARCIS (Netherlands)

    van den Bogaart, Antoine C.M.; Bilderbeek, Richardus; Schaap, Harmen; Hummel, Hans G.K.; Kirschner, Paul A.

    2016-01-01

    This article introduces a dedicated, computer-supported method to construct and formatively assess open, annotated concept maps of Personal Professional Theories (PPTs). These theories are internalised, personal bodies of formal and practical knowledge, values, norms and convictions that

  9. The computer algebra approach of the finite difference methods for PDEs

    International Nuclear Information System (INIS)

    Liu Ruxun.

    1990-01-01

    In this paper, a first attempt has been made to realize the computer algebra construction of finite difference methods (finite difference schemes) for constant-coefficient partial differential equations. (author). 9 refs, 2 tabs
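
    Modern computer algebra systems make this kind of construction routine; for instance, SymPy can derive finite-difference weights symbolically. The snippet below is a small illustration in the spirit of the paper, not its actual code.

```python
import sympy as sp
from sympy.calculus.finite_diff import finite_diff_weights

h = sp.symbols('h', positive=True)
# Weights of the 2nd derivative on the symmetric 3-point stencil {-h, 0, h}
w = finite_diff_weights(2, [-h, 0, h], 0)[2][-1]
print(w)   # -> [1/h**2, -2/h**2, 1/h**2]: the classical central difference
```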

  10. Computer-Based Job and Occupational Data Collection Methods: Feasibility Study

    National Research Council Canada - National Science Library

    Mitchell, Judith I

    1998-01-01

    .... The feasibility study was conducted to assess the operational and logistical problems involved with the development, implementation, and evaluation of computer-based job and occupational data collection methods...

  11. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    Science.gov (United States)

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  12. ALE: Additive Latent Effect Models for Grade Prediction

    OpenAIRE

    Ren, Zhiyun; Ning, Xia; Rangwala, Huzefa

    2018-01-01

    The past decade has seen a growth in the development and deployment of educational technologies for assisting college-going students in choosing majors, selecting courses and acquiring feedback based on past academic performance. Grade prediction methods seek to estimate a grade that a student may achieve in a course that she may take in the future (e.g., next term). Accurate and timely prediction of students' academic grades is important for developing effective degree planners and early war...

  13. Thermoelectricity analogy method for computing the periodic heat transfer in external building envelopes

    International Nuclear Information System (INIS)

    Peng Changhai; Wu Zhishen

    2008-01-01

    Simple and effective computation methods are needed to calculate energy efficiency in buildings for building thermal comfort and HVAC system simulations. This paper, which is based upon the theory of thermoelectricity analogy, develops a new harmonic method, the thermoelectricity analogy method (TEAM), to compute the periodic heat transfer in external building envelopes (EBE). It presents, in detail, the principles and specific techniques of TEAM to calculate both the decay rates and time lags of EBE. First, a set of linear equations is established using the theory of thermoelectricity analogy. Second, the temperature of each node is calculated by solving the linear equations set. Finally, decay rates and time lags are found by solving simple mathematical expressions. Comparisons show that this method is highly accurate and efficient. Moreover, relative to the existing harmonic methods, which are based on the classical control theory and the method of separation of variables, TEAM does not require complicated derivation and is amenable to hand computation and programming
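
    For a single homogeneous layer the harmonic solution is known in closed form, which makes the two quantities TEAM targets easy to illustrate; below is a minimal sketch (Python) of the decay rate and time lag of a 24 h temperature wave crossing one layer. It treats a single layer only and ignores surface film resistances; it illustrates the quantities, not the TEAM network itself.

```python
import numpy as np

def decay_and_lag(k, rho, c, L, period=24 * 3600.0):
    """Decay rate and time lag [h] of a periodic temperature wave crossing a
    homogeneous layer: conductivity k [W/(m K)], density rho [kg/m^3],
    specific heat c [J/(kg K)], thickness L [m]."""
    omega = 2.0 * np.pi / period
    alpha = k / (rho * c)                   # thermal diffusivity [m^2/s]
    gamma = np.sqrt(1j * omega / alpha)     # complex wave number [1/m]
    H = 1.0 / np.cosh(gamma * L)            # inner/outer amplitude transfer
    decay_rate = 1.0 / abs(H)               # >= 1: larger means more damping
    time_lag_h = -np.angle(H) / omega / 3600.0
    return decay_rate, time_lag_h

print(decay_and_lag(k=1.4, rho=2100.0, c=880.0, L=0.2))  # concrete-like wall
```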

  14. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution consists in establishing relationships among hard disk dumps (images), RAM and network data. The method is intended for describing the properties of an information security incident during the internal post-incident audit of computer equipment. Hard disk dumps are acquired and formed in the first step, followed by separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis for the formation of the graph. The separated data are recorded into a non-relational database management system (NoSQL) that is adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method allows a human expert in information security or computer forensics to perform a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on the internal audit of computer equipment while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied, along with other components, to the tasks of user identification and computer forensics.

  15. Computational Methods for Protein Structure Prediction and Modeling Volume 1: Basic Characterization

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume one of this two volume sequence focuses on the basic characterization of known protein structures as well as structure prediction from protein sequence information. The 11 chapters provide an overview of the field, covering key topics in modeling, force fields, classification, computational methods, and struture prediction. Each chapter is a self contained review designed to cover (1) definition of the problem and an historical perspective, (2) mathematical or computational formulation of the problem, (3) computational methods and algorithms, (4) performance results, (5) existing software packages, and (6) strengths, pitfalls, challenges, and future research directions.

  16. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has always puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME code also gives an inelastic method of design by analysis, limited to gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes huge differences between the results calculated with the different methods mentioned above; this is the main reason that limits wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to help select the proper computational method when designing a pressure vessel component by analysis. (authors)

  17. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD relies on vendor-specific methods and phantoms, typically by subjectively observing the smallest object visible at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal number of Hounsfield units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and the tissue/organ type strongly influenced the background noise characteristics and therefore the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT
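
    The statistical step can be sketched in a few lines: sample many virtual-object-sized regions from the uniform image and take the LCD as the smallest mean offset from background distinguishable at 95% confidence. Below is a sketch (Python/SciPy); square patches and one-sided testing are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def lcd_95(image, roi_px, n_samples=500, seed=0):
    """Estimate low-contrast detectability: the minimal HU offset from
    background detectable at 95% confidence for a roi_px-sized object."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    means = np.empty(n_samples)
    for i in range(n_samples):
        r = rng.integers(0, h - roi_px)
        c = rng.integers(0, w - roi_px)
        means[i] = image[r:r + roi_px, c:c + roi_px].mean()
    # one-sided 95% threshold on the spread of ROI means (Student's t)
    return stats.t.ppf(0.95, df=n_samples - 1) * means.std(ddof=1)

phantom = np.random.default_rng(1).normal(0.0, 8.0, (256, 256))  # uniform image
print(lcd_95(phantom, roi_px=4))   # LCD rises as the virtual object shrinks
```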

  18. Measurement Methods for Humeral Retroversion Using Two-Dimensional Computed Tomography Scans: Which Is Most Concordant with the Standard Method?

    Science.gov (United States)

    Oh, Joo Han; Kim, Woo; Cayetano, Angel A

    2017-06-01

    Humeral retroversion is variable among individuals, and there are several measurement methods. This study was conducted to compare the concordance and reliability between the standard method and 5 other measurement methods on two-dimensional (2D) computed tomography (CT) scans. CT scans from 21 patients who underwent shoulder arthroplasty (19 women and 2 men; mean age, 70.1 years [range, 42 to 81 years]) were analyzed. The elbow transepicondylar axis was used as a distal reference. Proximal reference points included the central humeral head axis (standard method), the axis of the humeral center to 9 mm posterior to the posterior margin of the bicipital groove (method 1), the central axis of the bicipital groove -30° (method 2), the base axis of the triangular shaped metaphysis +2.5° (method 3), the distal humeral head central axis +2.4° (method 4), and contralateral humeral head retroversion (method 5). Measurements were conducted independently by two orthopedic surgeons. The mean humeral retroversion was 31.42° ± 12.10° using the standard method, and 29.70° ± 11.66° (method 1), 30.64° ± 11.24° (method 2), 30.41° ± 11.17° (method 3), 32.14° ± 11.70° (method 4), and 34.15° ± 11.47° (method 5) for the other methods. Interobserver reliability and intraobserver reliability exceeded 0.75 for all methods. On the test evaluating the equality of the standard method to the other methods, the intraclass correlation coefficients (ICCs) of method 2 and method 4 differed from the ICC of the standard method for surgeon A (p < 0.05), and the ICCs of method 2 and method 3 differed from the ICC of the standard method for surgeon B (p < 0.05). The bicipital groove axis (method 1) would therefore be most concordant with the standard method, even though all 5 methods showed excellent agreement.

  19. Efficient method for computing the electronic transport properties of a multiterminal system

    Science.gov (United States)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
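
    The two-probe building blocks that the multiprobe mapping reduces to can be sketched compactly; below is a minimal Landauer transmission calculation for a clean 1D tight-binding chain with two semi-infinite leads (Python/NumPy). This is a toy model with analytic lead self-energies, not the authors' recursive graphene code.

```python
import numpy as np

def transmission(E, n=20, t=1.0, eta=1e-9):
    """Landauer transmission T(E) of an n-site clean chain coupled to two
    semi-infinite 1D leads (hopping t), via Green's functions."""
    # analytic retarded surface Green's function of a semi-infinite chain
    z = (E + 1j * eta) / (2 * t)
    g_s = (z - 1j * np.sqrt(1 - z**2 + 0j)) / t
    sigma = t**2 * g_s                               # lead self-energy

    H = -t * (np.eye(n, k=1) + np.eye(n, k=-1))      # device Hamiltonian
    Sig_L = np.zeros((n, n), complex); Sig_L[0, 0] = sigma
    Sig_R = np.zeros((n, n), complex); Sig_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sig_L - Sig_R)
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)            # broadening matrices
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real  # Caroli formula

print(transmission(0.5))   # ~1.0 inside the band of a clean chain
```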

  20. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs, by considerably reducing the area (thus increasing the level of parallelism, while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
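
    The core trick, replacing the arctangent with slope comparisons, can be sketched as follows (Python). Nine unsigned-orientation bins and a division-free L1 magnitude are assumed here to mirror the spirit of the method; this is not the authors' exact FPGA formulation.

```python
import numpy as np

def orientation_bins(gx, gy, nbins=9):
    """Unsigned-orientation binning over [0, 180) degrees with neither an
    arctangent nor a division: fold gradient arrays into the gx >= 0
    half-plane, then locate gy/gx among precomputed tangent boundaries by
    comparing gy against tan(boundary) * gx (cross-multiplication)."""
    flip = gx < 0
    gx = np.where(flip, -gx, gx)
    gy = np.where(flip, -gy, gy)
    step = 180.0 / nbins
    edges = np.tan(np.deg2rad(-90.0 + step * np.arange(1, nbins)))
    b = np.zeros(gx.shape, dtype=np.int32)
    for tk in edges:                     # gy/gx >= tk  <=>  gy >= tk * gx
        b += (gy >= tk * gx).astype(np.int32)
    # relabel so that bin k is centered on k * (180/nbins) degrees
    return (b + (nbins + 1) // 2) % nbins

# A division-free magnitude stand-in (the paper's exact allocation differs):
l1_magnitude = lambda gx, gy: np.abs(gx) + np.abs(gy)
```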

  1. Modeling The Shock Initiation of PBX-9501 in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L; Springer, H K; Mace, J; Mas, E

    2008-07-01

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive PBX 9501. A series of finite element impact calculations has been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrangian-Eulerian hydrodynamic model in ALE3D applies the Ignition and Growth reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle-of-impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the SDT reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.

  2. Methods and experimental coefficients used in the computation of reactor shielding

    International Nuclear Information System (INIS)

    Bourgeois, J.; Lafore, P.; Millot, J.P.; Rastoin, J.; Vathaire, F. de

    1958-01-01

    1. The concept of an effective removal cross section has been developed in order to compute reactor shielding thicknesses more easily. We have built an experimental facility for the purpose of measuring effective removal cross sections, the values of which had not been published at that time. The first part of this paper describes the facility used, the computation method applied, and the results obtained. 2. Starting from this concept, we endeavored to define a removal cross section as a function of energy. This enabled us to use the method for computations bearing on the attenuation of fast neutrons of any spectrum. An experimental verification was carried out for the case of fission neutrons filtered by a substantial thickness of graphite. 3. Finally, we outline a computation method enabling us to determine the sources of capture gamma rays by the age theory, and we give an example of the application in a composite shield. (author) [fr

  3. Analysis of Protein by Spectrophotometric and Computer Colour Based Intensity Method from Stem of Pea (Pisum sativum at Different Stages

    Directory of Open Access Journals (Sweden)

    Afsheen Mushtaque Shah

    2010-12-01

    Full Text Available In this study, proteins from pea plants were analyzed at three different growth stages of the stem by the spectrophotometric (Lowry and Bradford) quantitative methods and by a computer colour-intensity-based method. Though the spectrophotometric methods are regarded as the classical methods, we report an alternative computer-based method which gave comparable results. Computer software was developed for the protein analysis, which is an easier, time- and money-saving method compared to the classical methods.

  4. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    Science.gov (United States)

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are briefly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples of the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid, which are based on two different modeling approaches, and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes, and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering, where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

  5. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  6. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    International Nuclear Information System (INIS)

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-01

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in the spectral flux density values lower than 5% with respect to an exact solution, whereas the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile and demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  7. D-Branes on ALE Spaces and the ADE Classification of Conformal Field Theories

    CERN Document Server

    Lerche, Wolfgang; Schweigert, C

    2002-01-01

    The spectrum of D2-branes wrapped on an ALE space of general ADE type is determined, by representing them as boundary states of N=2 superconformal minimal models. The stable quantum states have RR charges which precisely represent the gauge fields of the corresponding Lie algebra. This provides a simple and direct physical link between the ADE classification of N=2 superconformal field theories, and the corresponding root systems. An affine extension of this structure is also considered, whose boundary states represent the D2-branes plus additional D0-branes.

  8. Aportaciones a la identificación de señales impulsivas generadas por impactos [Contributions to the identification of impulsive signals generated by impacts]

    OpenAIRE

    Molino Minero, Erik

    2010-01-01

    This thesis studies the processing of impulsive signals generated by impacts between rigid bodies. One of the problems encountered when working with impacts is that their analysis is generally limited to indirect measurements, either because the collisions do not occur directly on the sensor or because the colliding object cannot be instrumented. As a result, there is a propagation medium between the sensor and the point of impact that distorts…

  10. SEÑALES DE CALCIO NUCLEAR INDUCIDAS POR IGF-1 EN CARDIOMIOCITOS: CARACTERIZACION Y MECANISMO FUNCIONAL [Nuclear calcium signals induced by IGF-1 in cardiomyocytes: characterization and functional mechanism]

    OpenAIRE

    IBARRA IRIBARREN, CRISTIAN ANDRES

    2010-01-01

    IGF-1 is an important pro-hypertrophic and anti-apoptotic stimulus in cardiomyocytes. Several signal transduction pathways are activated by IGF-1 in cardiomyocytes and are involved in its pathophysiological effects. In our laboratory we have studied the participation of intracellular calcium in the IGF-1 transduction system. We found that IGF-1 transiently increases intracellular calcium levels, with fast kinetics, at the level of the cell nuclei…

  11. Proyecto de un centro de cerveza ale artesanal de trigo en Cuéllar (Segovia) [Design of a craft wheat ale brewery in Cuéllar (Segovia)]

    OpenAIRE

    García Sanz, Simón

    2016-01-01

    This final degree project sets out the design of a facility for brewing craft wheat ale in the town of Cuéllar (Segovia). The five official project documents are used to present all the observations, installations, studies, drawings and other documentation required for its correct execution. Degree in Agricultural and Food Industry Engineering.

  12. Forced folding in a salty basin: Gada'-Ale in the Afar

    Science.gov (United States)

    Rafflin, Victoria; Hetherington, Rachel; Hagos, Miruts; van Wyk de Vries, Benjamin

    2017-04-01

    The Gada'-Ale Volcano in the Danakil Depression of Ethiopia is a curious shield-like, or flat dome-like, volcanic centre in the Afar Rift. It has several fissure eruptions visible on its mid and lower flanks, and an even more curious ring structure on its western side that has been interpreted as a salt diapir. The complex lies in the central part of the basin, where there are 1-2 km thick salt deposits. The area was active in the 1990s (Amelung et al., 2000) with no eruptive activity but a possible intrusion, and there was also an intrusion north of Gada'-Ale at Dallol in 2005 (Nobile et al., 2012). Using Google Earth imagery, we have mapped the volcano and note that: a) the main edifice has a thin skin of lava lying on light-coloured rock; b) these thin deposits are sliding down the flank of the volcano and thrusting at its base, breaking into detached plates as they do so. The light colour of the deposits, and the ability of the rock to slide on them, suggest that they are salt. Fractures on and around the volcano form curved patterns around raised areas several kilometres in diameter; these could be the surface expressions of shallow sills. Putting these observations together with the known geology of adjacent centres like Dallol and Alu, we suggest that Gada'-Ale is a forced fold, created over a sill that has either bulged into a laccolith or risen as a saucer-shaped sill. The upraised salt has caused the thin veneer of volcanics to slide off. The presence of eruptive fissures on Gada'-Ale and of possible sill intrusions around its base suggests that the centre lies over a complex of sills that have gradually intruded and bulged the structure to its present level. Eruptions have contributed only a small amount to the overall topography of the edifice. We hope to visit the volcano in March and will bring hot-off-the-press details back to the EGU!

  13. A Lanczos eigenvalue method on a parallel computer. [for large complex space structure free vibration analysis

    Science.gov (United States)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and emerging parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask level, for example in matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency is problem and computer dependent, the efficiency of the Lanczos method was good for a moderate number of processors for the test problem. The greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
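
    The serial kernel being parallelized is compact; below is a minimal Lanczos iteration for the lowest eigenvalues of a symmetric matrix (Python/NumPy, with full reorthogonalization for clarity). The parallel implementation distributes exactly these matrix-vector, decomposition and substitution steps; the 486×486 random test matrix is a synthetic stand-in, not the paper's structural model.

```python
import numpy as np

def lanczos_lowest(A, k=20, m=80, seed=0):
    """Approximate the k lowest eigenvalues of symmetric A with an m-step
    Lanczos iteration (full reorthogonalization for numerical robustness)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]                              # dominant cost: mat-vec
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)     # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)[:k]                 # Ritz values, ascending

A = np.random.default_rng(1).standard_normal((486, 486))
A = 0.5 * (A + A.T)                                  # symmetric test matrix
print(lanczos_lowest(A)[:5])
```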

  14. Comparison of Three Different Parallel Computation Methods for a Two-Dimensional Dam-Break Model

    Directory of Open Access Journals (Sweden)

    Shanghong Zhang

    2017-01-01

    Full Text Available Three parallel methods (OpenMP, MPI, and OpenACC) are evaluated for the computation of a two-dimensional dam-break model using the explicit finite volume method. A dam-break event in the Pangtoupao flood storage area in China is selected as a case study to demonstrate the key technologies for implementing parallel computation. The acceleration achieved by each method is also evaluated. The simulation results show that the OpenMP and MPI parallel methods achieve speedup factors of 9.8× and 5.1×, respectively, on a 32-core computer, whereas the OpenACC parallel method achieves a speedup factor of 20.7× on an NVIDIA Tesla K20c graphics card. The results show that if the memory required by the dam-break simulation does not exceed the memory capacity of a single computer, the OpenMP parallel method is a good choice; if GPU acceleration is used, the OpenACC parallel method provides the best acceleration; and the MPI parallel method is suitable for a model that requires little data exchange and large-scale calculation. This study compares the efficiency and methodology of accelerating algorithms for a dam-break model and can also be used as a reference for selecting the best acceleration method for a similar hydrodynamic model.

  15. Análisis de señales de acelerometría en Biomecánica [Analysis of accelerometry signals in Biomechanics]

    OpenAIRE

    Camacho García, Andrés; Llinares Llopis, Raúl; Miro Borras, Julio; Bernabeu Soler, Pablo Andrés

    2015-01-01

    The analysis of accelerometry signals makes it possible to detect movements that may be harmful when performing a physical activity. To identify these movements, it is necessary to analyse parameters of the accelerometry signals obtained at several points of interest. Locating these points becomes a tedious task when it is done manually by an expert. This work presents an example of the application of signal processing techniques to bi…

  16. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no single computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity over all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was quantified and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low density levels. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction over all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, which creates a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.
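
    The ANN branch of such a study can be sketched quickly (Python with scikit-learn). The training data below are synthetic stand-ins, since the refrigerant datasets themselves are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: thermal conductivity as a nonlinear function of
# temperature and density, plus noise (NOT real refrigerant data).
rng = np.random.default_rng(0)
T, rho = rng.uniform(0.5, 1.5, 2000), rng.uniform(0.0, 12.0, 2000)
k = 0.01 * T + 0.002 * rho**2 + 0.005 * np.sin(rho) + rng.normal(0, 5e-4, 2000)

X = np.column_stack([T, rho])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X[:1500], k[:1500])
print("test R^2:", model.score(X[1500:], k[1500:]))
```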

  17. ÎNCEPUTURILE FILOSOFICE ALE LUI LUCIAN BLAGA [The philosophical beginnings of Lucian Blaga]

    Directory of Open Access Journals (Sweden)

    Svetlana COANDĂ

    2016-12-01

    Full Text Available This article highlights the stages of Lucian Blaga's spiritual evolution and demonstrates the importance of the first period of activity of the Romanian thinker, the years 1914-1919, which constitute the beginning of his philosophical activity. This period is especially significant for understanding the genesis of his ideas, the influences he absorbed, and the continuity of the philosophical themes and reflections that culminated in the elaboration of a profound philosophical system. The central philosophical ideas and themes of meditation in the works written and published by L. Blaga between 1914 and 1919 are analysed in detail: the analysis and appreciation of the ideas of the French philosopher Henri Bergson, the relationship between philosophy and science, the specific contribution of philosophy to the formation of a personality's conception of the world, the especially important role of research methods in ensuring that truth is obtained in the process of cognition, the unity of the forms of culture, etc.

  18. A computer vision based method for 3D posture estimation of symmetrical lifting.

    Science.gov (United States)

    Mehrizi, Rahil; Peng, Xi; Xu, Xu; Zhang, Shaoting; Metaxas, Dimitris; Li, Kang

    2018-03-01

    Work-related musculoskeletal disorders (WMSD) are commonly observed among workers involved in material handling tasks such as lifting. To improve workplace safety, it is necessary to assess the musculoskeletal and biomechanical risk exposures associated with these tasks. Such assessment has mainly been conducted using surface marker-based methods, which are time consuming and tedious. During the past decade, computer vision based pose estimation techniques have gained increasing interest and may be a viable alternative to surface marker-based human movement analysis. The aim of this study is to develop and validate a computer vision based marker-less motion capture method to assess the 3D joint kinematics of lifting tasks. Twelve subjects performing three types of symmetrical lifting tasks were filmed from two views using optical cameras. The joint kinematics were calculated by the proposed computer vision based motion capture method as well as by a surface marker-based motion capture method. The joint kinematics estimated by the computer vision based method were practically comparable to those obtained by the surface marker-based method: the mean and standard deviation of the difference between the joint angles estimated by the two methods was 2.31 ± 4.00°. One potential application of the proposed marker-less method is to noninvasively assess the 3D joint kinematics of industrial tasks such as lifting. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. EMRlog Method for Computer Security for Electronic Medical Records with Logic and Data Mining

    Directory of Open Access Journals (Sweden)

    Sergio Mauricio Martínez Monterrubio

    2015-01-01

    Full Text Available The proper functioning of a hospital computer system is an arduous task for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method, named EMRlog, for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information, such as databases, applications, and medical records. First, a syntactic verification step is applied using predicate logic. Then data mining techniques are used to detect which security policies have actually been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition, these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed to achieve a safer computer system.

  20. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    Science.gov (United States)

    Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY

    2008-06-03

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
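
    The key property is that the detection value is commutative, so packet arrival order may differ between runs without changing the value. A toy sketch (Python) of per-node order-independent checksums compared across two runs of a reproducible program; this is illustrative only, not the patented apparatus.

```python
import zlib

def node_checksum(packets):
    """Commutative error-detection value: sum of per-packet CRC32s mod 2^32.
    Addition commutes, so any packet ordering yields the same value."""
    return sum(zlib.crc32(p) for p in packets) & 0xFFFFFFFF

def faulty_nodes(run_a, run_b):
    """Nodes whose checksums differ between two runs are suspect."""
    return [node for node in run_a
            if node_checksum(run_a[node]) != node_checksum(run_b[node])]

run1 = {0: [b"alpha", b"beta"], 1: [b"gamma"]}
run2 = {0: [b"beta", b"alpha"], 1: [b"gamma", b"corrupt"]}  # node 1 differs
print(faulty_nodes(run1, run2))   # -> [1]
```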

  1. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals. A toy analogue of the differential-equation workflow is sketched below.
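
    As a toy analogue of that workflow (not the paper's method), a one-parameter integral can be computed by deriving its differential equation by hand and integrating from a region where the boundary value is almost trivial:

```python
# Toy analogue of the differential-equation method: for I(s) = int_0^1 dx/(x+s),
# differentiating under the integral gives dI/ds = 1/(1+s) - 1/s. At large s
# the boundary condition is almost trivial, I(s) ~ 1/s. Integrating the ODE
# down to s = 1 recovers I(1) = ln 2. Real master integrals replace this by
# large coupled ODE systems, but the workflow is the same.

import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda s, I: 1.0 / (1.0 + s) - 1.0 / s

s_start, s_end = 1.0e4, 1.0
I_start = [1.0 / s_start]          # almost-trivial boundary value at large s

sol = solve_ivp(rhs, (s_start, s_end), I_start, rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], np.log(2.0))   # both ~ 0.693147
```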

  2. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions, as sketched below.
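
    A sketch of the one-parameter scaling idea follows; the transmittance law, reference pressure, exponent, and absorption coefficient are placeholders, not the paper's fitted band parameters.

```python
# One-parameter scaling approximation: an inhomogeneous path is replaced by an
# equivalent homogeneous path by scaling the absorber amount with pressure,
#     u_eff = sum over layers of  du * (p / p_ref)**m,
# after which a homogeneous-path transmittance function can be applied.
# The exponential transmittance and all constants below are illustrative only.

import numpy as np

P_REF = 500.0   # hPa, assumed reference pressure
M = 0.8         # assumed scaling exponent
K = 0.05        # assumed effective absorption coefficient (per unit absorber)

def scaled_absorber(du, p):
    """du: absorber amount per layer; p: layer pressures (hPa)."""
    return np.sum(du * (p / P_REF) ** M)

def transmittance(du, p):
    return np.exp(-K * scaled_absorber(du, p))

p = np.linspace(200.0, 1000.0, 9)   # layer pressures of a model column
du = np.full_like(p, 1.0)           # uniform absorber amount per layer
print(transmittance(du, p))
```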

  3. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    NARCIS (Netherlands)

    Ramamurthy, S.; D'Orsi, C.J.; Sechopoulos, I.

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360 degrees with a

  4. Computer based methods for measurement of joint space width: update of an ongoing OMERACT project

    NARCIS (Netherlands)

    Sharp, John T.; Angwin, Jane; Boers, Maarten; Duryea, Jeff; von Ingersleben, Gabriele; Hall, James R.; Kauffman, Joost A.; Landewé, Robert; Langs, Georg; Lukas, Cédric; Maillefert, Jean-Francis; Bernelot Moens, Hein J.; Peloschek, Philipp; Strand, Vibeke; van der Heijde, Désirée

    2007-01-01

    Computer-based methods of measuring joint space width (JSW) could potentially have advantages over scoring joint space narrowing, with regard to increased standardization, sensitivity, and reproducibility. In an early exercise, 4 different methods showed good agreement on measured change in JSW over

  5. Methods, systems, and computer program products for network firewall policy optimization

    Science.gov (United States)

    Fulp, Errin W [Winston-Salem, NC; Tarsa, Stephen J [Duxbury, MA

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating the likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy, as sketched below.
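
    A minimal sketch of such a policy-preserving sort, under the simplifying assumption that rules match per-field sets of values (the patented method works on real packet-header ranges): two adjacent rules may be swapped only when no packet can match both, so precedence among overlapping rules is preserved.

```python
# Simple greedy: bubble toward non-increasing match probability, swapping an
# adjacent pair only when the two rules are disjoint (policy-preserving).

def intersect(r1, r2):
    """True if some packet could match both rules (per-field set overlap)."""
    return all(f1 & f2 for f1, f2 in zip(r1["fields"], r2["fields"]))

def policy_sort(rules):
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b["prob"] > a["prob"] and not intersect(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

rules = [
    {"name": "r1", "prob": 0.05, "fields": [{80}, {"tcp"}]},
    {"name": "r2", "prob": 0.70, "fields": [{53}, {"udp"}]},  # disjoint from r1
    {"name": "r3", "prob": 0.25, "fields": [{80}, {"tcp"}]},  # overlaps r1: stays after it
]
print([r["name"] for r in policy_sort(rules)])  # -> ['r2', 'r1', 'r3']
```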

  6. Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field

    Science.gov (United States)

    Kinnunen, Paivi; Simon, Beth

    2012-01-01

    This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the types of results you may get at the end, using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…

  7. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back propagation neural network approach to compute the reliability of a system. Because the states in which failures occur are essential elements for accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computations, and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated; a minimal sketch of the Markovian half is given below. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back propagation neural network approach to compute reliability. • Markovian based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
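
    A minimal sketch of the Markovian half of the approach, with an invented two-unit system: reliability at step k is the probability that the absorbing failed state has not yet been entered. Transition probabilities are illustrative, not from the paper.

```python
import numpy as np

# States: 0 = both units up, 1 = one unit up, 2 = failed (absorbing)
P = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

def reliability(P, horizon, start=0):
    """R(k) = P(system has not been absorbed by step k)."""
    dist = np.zeros(P.shape[0]); dist[start] = 1.0
    out = []
    for _ in range(horizon):
        dist = dist @ P
        out.append(1.0 - dist[-1])   # probability mass outside the failed state
    return out

for k, r in enumerate(reliability(P, 5), 1):
    print(f"R({k}) = {r:.4f}")
```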

  8. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency. A toy sketch of the frequency-selection step follows.
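
    An illustrative sketch of the selection step only; the storage layout, scoring scale, and threshold are assumptions, not the patented mechanism.

```python
# Given stored (frequency -> satisfaction) observations for a particular user
# and application, pick the lowest discrete frequency whose average reported
# satisfaction clears a threshold; fall back to full speed with no data.

from collections import defaultdict
from statistics import mean

class SatisfactionModel:
    def __init__(self):
        # (user, app) -> {frequency_MHz: [satisfaction scores in 0..1]}
        self.obs = defaultdict(lambda: defaultdict(list))

    def record(self, user, app, freq, score):
        self.obs[(user, app)][freq].append(score)

    def pick_frequency(self, user, app, freqs, threshold=0.8):
        """Lowest frequency predicted to keep this user satisfied."""
        history = self.obs[(user, app)]
        for f in sorted(freqs):
            scores = history.get(f)
            if scores and mean(scores) >= threshold:
                return f
        return max(freqs)   # nothing clears the bar: fail safe to full speed

m = SatisfactionModel()
for f, s in [(800, 0.55), (1600, 0.82), (2400, 0.97)]:
    m.record("alice", "browser", f, s)
print(m.pick_frequency("alice", "browser", [800, 1600, 2400]))  # -> 1600
```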

  9. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  10. The adaptation method in the Monte Carlo simulation for computed tomography

    Directory of Open Access Journals (Sweden)

    Hyounggun Lee

    2015-06-01

    Full Text Available The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  11. Computational Quantum Mechanics for Materials Engineers The EMTO Method and Applications

    CERN Document Server

    Vitos, L

    2007-01-01

    Traditionally, new materials have been developed by empirically correlating their chemical composition, and the manufacturing processes used to form them, with their properties. Until recently, metallurgists have not used quantum theory for practical purposes. However, the development of modern density functional methods means that today, computational quantum mechanics can help engineers to identify and develop novel materials. Computational Quantum Mechanics for Materials Engineers describes new approaches to the modelling of disordered alloys that combine the most efficient quantum-level th

  12. 2nd International Conference on Multiscale Computational Methods for Solids and Fluids

    CERN Document Server

    2016-01-01

    This volume contains the best papers presented at the 2nd ECCOMAS International Conference on Multiscale Computations for Solids and Fluids, held June 10-12, 2015. Topics dealt with include multiscale strategy for efficient development of scientific software for large-scale computations, coupled probability-nonlinear-mechanics problems and solution methods, and modern mathematical and computational setting for multi-phase flows and fluid-structure interaction. The papers consist of contributions by six experts who taught short courses prior to the conference, along with several selected articles from other participants dealing with complementary issues, covering both solid mechanics and applied mathematics. .

  13. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

  14. Application of the Ssub(n)-method for reactors computations on BESM-6 computer by using 26-group constants in the sub-group presentation

    International Nuclear Information System (INIS)

    Rogov, A.D.

    1975-01-01

    A description is given of the computer programs for reactor computation by application of the Ssub(n)-method in the two-dimensional XY and RZ geometries. These programs are used with the computer library of the 26-group constants system, taking into account the resonance structure of the cross sections in the subgroup presentation. Results of computations for several systems are given and analysed. (author)

  15. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

    Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Voxel-Based Morphometry ALE meta-analysis of Bipolar Disorder

    Science.gov (United States)

    Magana, Omar; Laird, Robert

    2012-03-01

    A meta-analysis was performed independently to view the changes in gray matter (GM) in patients with Bipolar disorder (BP). The meta-analysis was conducted in Talairach space using GingerALE to determine the voxels and their permutation. For data acquisition, published experiments and similar research studies were uploaded onto the online Voxel-Based Morphometry database (VBM). By doing so, coordinates of activation locations were extracted from Bipolar disorder related journals utilizing Sleuth. Once the coordinates of the given experiments were selected and imported to GingerALE, a Gaussian was applied to all foci points to create the concentration points of GM in BP patients. The results included volume reductions and variations of GM between normal healthy controls and patients with Bipolar disorder. A significantly greater number of GM clusters was obtained in normal healthy controls than in BP patients in the right precentral gyrus, right anterior cingulate, and the left inferior frontal gyrus. In future research, more published journals could be uploaded onto the database and another VBM meta-analysis could be performed including more activation coordinates or a variation of age groups.

  17. History of the pharmacies in the town of Aleşd, Bihor county.

    Science.gov (United States)

    Paşca, Manuela Bianca; Gîtea, Daniela; Moisa, Corina

    2013-01-01

    In 1848 pharmacist Horváth Mihály established the first pharmacy in Aleşd, called Speranţa (Remény). Following the brief history of this pharmacy, we notice that in 1874 it came into the possession of Kocsiss József. In 1906 the personal rights of the pharmacy were transcribed to Kocsiss Béla, and in 1938 his son, the pharmacist Kocsiss Dezső, became the new owner. In 1949 the pharmacy was nationalized and became the property of the Pharmaceutical Office Oradea; renamed Farmacia nr. 22 of Aleşd, it continued its activity throughout the whole communist period. In 1991 it entered the private system as Angefarm, the property of pharmacist Mermeze Gheorghe, and since 2003 it has operated under the name Vitalogy 3, as the property of Ghitea Sorin. A second pharmacy, Sfântul Anton, was founded in 1937 by pharmacist Herceg Dobreanu Atena; it, however, had no continuity during the communist period.

  18. The impact of different ale brewer’s yeast strains on the proteome of immature beer

    DEFF Research Database (Denmark)

    Berner, Torben Sune; Jacobsen, Susanne; Arneborg, Nils

    2013-01-01

    BACKGROUND: It is well known that brewer’s yeast affects the taste and aroma of beer. However, the influence of brewer’s yeast on the protein composition of beer is currently unknown. In this study, changes of the proteome of immature beer, i.e. beer that has not been matured after fermentation, by ale brewer’s yeast strains with different abilities to degrade fermentable sugars were investigated. RESULTS: Beers were fermented from standard hopped wort (13° Plato) using two ale brewer’s yeast (Saccharomyces cerevisiae) strains with different attenuation degrees. Both immature beers had the same ... was present in beer brewed with KVL011, while lacking in WLP001 beer.

  19. An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism

    DEFF Research Database (Denmark)

    Zhang, Tian; Tremblay, Pier-Luc

    2018-01-01

    Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights on molecular mechanisms responsible for specific phenotypes. ALE ... autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge on the autotrophic metabolism of S. ovata as well as other acetogenic bacteria.

  20. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Full Text Available Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional measurement of TCF thickness relies on single/double wire methods, which have several problems such as personal safety, susceptibility to operator influence, and poor repeatability. To solve all these problems, in this paper we specifically designed and built an instrumentation and present a novel method to measure the TCF thickness. The instrumentation is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, scale invariant feature transform (SIFT), and an image gray-gradient detection method; a much-simplified sketch of this chain is given below. Using the present instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method worked well at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of traditional measurement methods, or even replace the traditional ones.
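
    A much-simplified sketch of the listed image-processing chain, assuming a calibrated millimetre-per-pixel factor in place of the paper's monocular range measurement; the interface detection here is a bare gray-gradient peak search.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.5  # assumption: from prior calibration of the camera geometry

def flux_thickness_mm(image_bgr, column):
    """Estimate the flux-layer thickness along one image column (in mm)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)           # denoising step
    grad = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=5)  # vertical gray gradient
    profile = np.abs(grad[:, column])
    # Assume the two strongest gradient rows are the top and bottom interfaces
    # of the cover-flux layer on the dipped measurement bar.
    top, bottom = sorted(np.argsort(profile)[-2:])
    return float(bottom - top) * MM_PER_PIXEL
```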

  1. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and for the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities. A sketch of the DELSA index computation is given below.
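
    A sketch of the DELSA index computation as we read the abstract: local derivative-based first-order sensitivities, weighted by prior parameter variances and normalised at each sample point. Treat the index form as an interpretation, not the reference implementation.

```python
import numpy as np

def delsa(model, samples, prior_var, h=1e-6):
    """model: f(theta) -> scalar; samples: (n, p) points; prior_var: (p,)."""
    samples = np.asarray(samples, dtype=float)
    n, p = samples.shape
    idx = np.empty((n, p))
    for i, theta in enumerate(samples):
        grad = np.empty(p)
        for j in range(p):                  # one-sided finite differences
            step = np.zeros(p); step[j] = h * max(1.0, abs(theta[j]))
            grad[j] = (model(theta + step) - model(theta)) / step[j]
        contrib = grad**2 * prior_var
        idx[i] = contrib / contrib.sum()    # local first-order indices
    return idx   # distribution of sensitivities across the parameter space

# Toy nonlinear two-parameter model, in the spirit of the simple reservoir:
f = lambda th: th[0]**2 + 0.5 * th[1]
pts = np.random.default_rng(0).uniform(0.5, 2.0, size=(100, 2))
print(delsa(f, pts, prior_var=np.array([0.25, 0.25])).mean(axis=0))
```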

  2. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W {yields} {mu}{nu}; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  3. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  4. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

    Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates development of accurate computational methods that can help focus the follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from a high false-positive prediction rate or do not predict the effect that drugs exert on targets in DTIs. In this report, first, we present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 of the top 25 DTIs predicted by DDR. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 of the 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  5. Applications and comparisons of methods of computing the S Matrix of 2-ports

    International Nuclear Information System (INIS)

    Jones, R.M.; Ko, Kwok; Tantawi, S.; Kroll, N.; Yu, D.

    1993-05-01

    We report on the application of four different methods of computing the S Matrix for 2-port microwave circuits. The four methods are modal expansions with field matching across boundaries, time domain integration of Maxwell's equations as implemented in MAFIA, HFSS (high frequency structure simulator), and the KKY frequency domain method. Among the applications to be described are steps in rectangular waveguides and irises in waveguides

  6. The Fast Adaptive Composite Grid Method and Algebraic Multigrid in Large Scale Computation

    Science.gov (United States)

    1991-01-03

    that is more easily extended to other problems, and could be used in geometric multigrid methods as well. AMG was also applied to problems in fluids...processing must lead to processor idle time. The results of this study proved this concern to be unfounded: practical use of parallel multigrid methods would...development is a precise comparative analysis of the complexity of the Schwarz and multigrid methods in serial and parallel computation. Not surprisingly

  7. Fast electromagnetic Field Computations Using Multigrid Method Based on Nested Finite Elements Meshes

    OpenAIRE

    Cingoski, Vlatko; Yamashita, Hideo

    1999-01-01

    In this paper the investigation of the efficiency of the multigrid method as a solution method for the large systems of algebraic equations that arise from ordinary finite element analysis is presented. The mathematical background for multigrid methods and some points regarding the definition of restriction and prolongation matrices for multigrid finite element analysis based on nested meshes are also given; a textbook V-cycle sketch appears below. The convergence rate and computation speed of the V-cycle and W-cycle multigrid algorithms are disc...
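
    For orientation, a textbook V-cycle on nested 1D meshes for -u'' = f, with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation; this is generic material, not the paper's finite-element code.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1
    u = jacobi(u, f, h, sweeps=3)                    # pre-smoothing
    if n > 2:
        r = residual(u, f, h)
        rc = np.zeros(n//2 + 1)                      # full-weighting restriction
        rc[1:-1] = 0.25*(r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
        ec = v_cycle(np.zeros(n//2 + 1), rc, 2*h)    # coarse-grid correction
        e = np.zeros(n + 1)                          # prolongation:
        e[::2] = ec                                  #   copy coarse nodes,
        e[1::2] = 0.5*(ec[:-1] + ec[1:])             #   interpolate between them
        u += e
    return jacobi(u, f, h, sweeps=3)                 # post-smoothing

n = 64; h = 1.0/n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi*x)                       # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi*x))))           # down at the h^2 level
```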

  8. Repertory Grids als Methode zur Untersuchung von Schülervorstellungen im Bereich Computer und Internet

    OpenAIRE

    Pancratz, Nils

    2016-01-01

    This thesis investigates the extent to which the repertory grid method is suited to eliciting students' conceptions in the area of computers and the Internet (or to confirming previous findings of such investigations). To this end, a concrete repertory grid method is developed and applied to a number of students. The investigations are evaluated and compared with previous results. Finally, the suitability of the method for investigating students' conceptions...

  9. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical analysis of cascades, the use of virtual components is a very useful technique. For complicated cascades, numerical analysis has to be employed. However, bound to the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restricted at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)

  10. An efficient method for multiple radiative transfer computations and the lookup table generation

    International Nuclear Information System (INIS)

    Wang Menghua

    2003-01-01

    An efficient method for the multiple radiative-transfer computations is proposed. The method is based on the fact that, in the radiative-transfer computation, most of the CPU time is used in the numerical integration for the Fourier components of the scattering phase function. With the new method, the lookup tables, which are usually needed to convert the spaceborne and the airborne sensor-measured signals to the desired physical and optical quantities, can be generated efficiently. We use the ocean color remote sensor Sea-viewing Wide Field-of-view Sensor as an example to show that, with the new approach, the CPU time can be reduced significantly for the generation of the lookup tables. The new scheme is useful and effective for the multiple radiative-transfer computations

  11. Shafting Alignment Computing Method of New Multibearing Rotor System under Specific Installation Requirement

    Directory of Open Access Journals (Sweden)

    Qian Chen

    2016-01-01

    Full Text Available The shafting of a large steam turbine generator set is composed of several rotors connected by couplings. The computing method for shafting with different structures under specific installation requirements is studied in this paper. Based on the three-moment equation, a shafting alignment mathematical model is established; the textbook form of that equation is sketched below. The computing methods for bearing elevations and loads under the corresponding installation requirements, where the bending moment of each coupling is zero and preset sag and gap exist in some couplings, are proposed, respectively. Bearing elevations and loads of shafting with different structures under specific installation requirements are calculated; the calculation results are compared with installation data measured on site, which verifies the validity and accuracy of the proposed shafting alignment computing method. The above work provides a reliable approach to analyze shafting alignment and can guide installation on site.
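
    For background, a sketch of the classical three-moment (Clapeyron) equation the model starts from, specialised to rigid supports, constant EI, and a uniform load per span; the paper's alignment model additionally carries bearing elevations and coupling conditions, which are omitted here.

```python
# Three-moment equation for a continuous shaft, uniform load w per span:
#   M[i-1]*L[i] + 2*M[i]*(L[i]+L[i+1]) + M[i+1]*L[i+1]
#       = -(w[i]*L[i]**3 + w[i+1]*L[i+1]**3) / 4
# Solving the tridiagonal system gives the support bending moments, from
# which bearing loads follow. Generic textbook sketch, not the paper's model.

import numpy as np

def support_moments(L, w):
    """L, w: per-span lengths and uniform loads (n spans -> n-1 inner supports)."""
    n = len(L)
    A = np.zeros((n - 1, n - 1)); b = np.zeros(n - 1)
    for i in range(1, n):               # equation at inner support i
        row = i - 1
        A[row, row] = 2 * (L[i - 1] + L[i])
        if row > 0:
            A[row, row - 1] = L[i - 1]
        if row < n - 2:
            A[row, row + 1] = L[i]
        b[row] = -(w[i - 1] * L[i - 1]**3 + w[i] * L[i]**3) / 4
    return np.linalg.solve(A, b)

# Two equal spans, equal load: the classic result M = -w*L**2/8 at mid-support.
print(support_moments([4.0, 4.0], [10.0, 10.0]))   # -> [-20.0]
```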

  12. Maximum likelihood methods in biology revisited with tools of computational intelligence.

    Science.gov (United States)

    Seiffertt, John; Vanbrunt, Andrew; Wunsch, Donald C

    2008-01-01

    We investigate the problem of identifying genes correlated with the occurrence of diseases in a given population. The classical method of parametric linkage analysis is combined with newer tools, and results are achieved on a model problem. This traditional method has advantages over non-parametric methods, but these advantages have been difficult to realize due to their high computational cost. We study a class of evolutionary algorithms from the computational intelligence literature which are designed to cut such costs considerably for optimization problems. We outline the details of this algorithm, called Particle Swarm Optimization, and present all the equations and parameter values we used to accomplish our optimization; a generic global-best variant is sketched below. We view this study as a launching point for a wider investigation into the leveraging of computational intelligence tools in the study of complex biological systems.
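
    A generic global-best PSO sketch on a toy negative log-likelihood surface; the inertia and acceleration constants are common defaults, not necessarily the paper's values.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)    # inertia+cognitive+social
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Maximizing a likelihood = minimizing its negative log; a toy example:
neg_log_like = lambda th: float(np.sum((th - 1.234)**2))
print(pso(neg_log_like, dim=3))   # -> approximately [1.234, 1.234, 1.234]
```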

  13. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer

  14. COMPUTING

    CERN Document Server

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  15. On computing efficiency of Monte-Carlo methods in solving Dirichlet's problem

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Lomtev, V.L.

    1990-01-01

    Algorithms of the Monte-Carlo method based on boundary random walks, employing Fredholm's series, and intended for the solution of the stationary and non-stationary boundary value Dirichlet problem for the Laplace equation are presented. The code systems BRANDB, BRANDBT and BRANDF, which implement the above algorithms and allow the calculation of the values of the solution and its derivatives for three-dimensional geometrical systems, are described; a generic member of this family of boundary random walks is sketched below. The results of computing experiments on solving a number of problems in systems with convex and non-convex geometries are presented, and conclusions are drawn on the computing efficiency of the methods involved. 13 refs.; 4 figs.; 2 tabs
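
    As generic background (not the BRAND codes themselves), the walk-on-spheres member of this family solves the Laplace/Dirichlet problem on the unit disk by repeatedly jumping to a uniformly random point of the largest circle around the current position until the boundary is reached; the solution value is the expected boundary value at the exit point.

```python
import numpy as np

def walk_on_spheres(x0, boundary_value, n_walks=20000, eps=1e-4, seed=1):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(x)    # distance to the unit circle
            if d < eps:
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += d * np.array([np.cos(theta), np.sin(theta)])  # jump on sphere
        total += boundary_value(x / np.linalg.norm(x))          # exit value
    return total / n_walks

# Boundary data g(theta) = cos(theta): the harmonic extension is u(x, y) = x.
g = lambda p: p[0]
print(walk_on_spheres((0.3, 0.2), g))   # ~ 0.3
```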

  16. Homogenized parameters of light water fuel elements computed by a perturbative (perturbation) method

    International Nuclear Information System (INIS)

    Koide, Maria da Conceicao Michiyo

    2000-01-01

    A new analytic formulation for material parameters homogenization of the two dimensional and two energy-groups diffusion model has been successfully used as a fast computational tool for recovering the detailed group fluxes in full reactor cores. The homogenization method which has been proposed does not require the solution of the diffusion problem by a numerical method. As it is generally recognized that currents at assembly boundaries must be computed accurately, a simple numerical procedure designed to improve the values of currents obtained by nodal calculations is also presented. (author)

  17. Slepian modeling as a computational method in random vibration analysis of hysteretic structures

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1999-01-01

    ... white noise. The computation time for obtaining estimates of relevant statistics at a given accuracy level is decreased by factors of one or more orders of magnitude as compared to the computation time needed for direct elasto-plastic displacement response simulations by vectorial Markov sequence techniques. Moreover, the Slepian method gives valuable physical insight into the details of the plastic displacement development over time. The paper gives a general self-contained mathematical description of the Slepian method based plastic displacement analysis of Gaussian white noise excited EPOs. Experiences...

  18. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.

  19. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is the analysis of the influence of inhomogeneities of the human body on the determination of the dose in Cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter. This analysis allows one to take into account, with greater accuracy, their physical characteristics and the corresponding modifications of the dose: ''the differential TAR method'' and ''the beam subtraction method''. The second part is dedicated to the computer implementation of the second method of correction for routine application in hospital [fr]

  20. Optimized Runge-Kutta methods with minimal dispersion and dissipation for problems arising from computational acoustics

    International Nuclear Information System (INIS)

    Tselios, Kostas; Simos, T.E.

    2007-01-01

    In this Letter a new explicit fourth-order seven-stage Runge-Kutta method, with a combination of minimal dispersion and dissipation error and maximal accuracy and stability limit along the imaginary axis, is developed. This method was produced by a general function that was constructed to satisfy all the above requirements and from which all the existing fourth-order six-stage RK methods can be produced; the generic explicit Runge-Kutta machinery being optimized is sketched below. The new method is more efficient than the other optimized methods for acoustic computations
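
    For orientation, the generic explicit Runge-Kutta stepper driven by a Butcher tableau; the coefficients below are the classical four-stage fourth-order scheme, not the paper's optimized seven-stage ones (those are given in the article itself).

```python
import numpy as np

# Butcher tableau of the classical four-stage RK4 scheme (illustrative
# stand-in for the optimized seven-stage coefficients).
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

def rk_step(f, t, y, h):
    """One explicit Runge-Kutta step for y' = f(t, y)."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i, j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Dissipation test on a pure oscillation y' = i*omega*y (|y| should stay 1).
f = lambda t, y: 1j * 2.0 * y
t, y, h = 0.0, 1.0 + 0.0j, 0.01
for _ in range(1000):
    y = rk_step(f, t, y, h)
    t += h
print(abs(y))   # ~1.0: little amplitude (dissipation) error at this step size
```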

  1. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  2. Computational Methods for Nanoscale X-ray Computed Tomography Image Analysis of Fuel Cell and Battery Materials

    Science.gov (United States)

    Kumar, Arjun S.

    Over the last fifteen years, there has been a rapid growth in the use of high resolution X-ray computed tomography (HRXCT) imaging in material science applications. We use it at nanoscale resolutions up to 50 nm (nano-CT) for key research problems in large scale operation of polymer electrolyte membrane fuel cells (PEMFC) and lithium-ion (Li-ion) batteries in automotive applications. PEMFC are clean energy sources that electrochemically react with hydrogen gas to produce water and electricity. To reduce their costs, capturing their electrode nanostructure has become significant in modeling and optimizing their performance. For Li-ion batteries, a key challenge in increasing their scope for the automotive industry is Li metal dendrite growth. Li dendrites are structures of lithium with 100 nm features of interest that can grow chaotically within a battery and eventually lead to a short-circuit. HRXCT imaging is an effective diagnostics tool for such applications as it is a non-destructive method of capturing the 3D internal X-ray absorption coefficient of materials from a large series of 2D X-ray projections. Despite a recent push to use HRXCT for quantitative information on material samples, there is a relative dearth of computational tools in nano-CT image processing and analysis. Hence, we focus on developing computational methods for nano-CT image analysis of fuel cell and battery materials as required by the limitations in material samples and the imaging environment. The first problem we address is the segmentation of nano-CT Zernike phase contrast images. Nano-CT instruments are equipped with Zernike phase contrast optics to distinguish materials with a low difference in X-ray absorption coefficient by phase shifting the X-ray wave that is not diffracted by the sample. However, it creates image artifacts that hinder the use of traditional image segmentation techniques. To restore such images, we set up an inverse problem by modeling the X-ray phase contrast

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  5. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  6. Computer-assisted design and computer-assisted modeling technique optimization and advantages over traditional methods of osseous flap reconstruction.

    Science.gov (United States)

    Matros, Evan; Albornoz, Claudia R; Rensberger, Michael; Weimer, Katherine; Garfein, Evan S

    2014-06-01

    There is increased clinical use of computer-assisted design (CAD) and computer-assisted modeling (CAM) for osseous flap reconstruction, particularly in the head and neck region. Limited information exists about methods to optimize the application of this new technology and about cases in which it may be advantageous over existing methods of osseous flap shaping. A consecutive series of osseous reconstructions planned with CAD/CAM over the past 5 years was analyzed. Conceptual considerations and refinements in the CAD/CAM process were evaluated. A total of 48 reconstructions were performed using CAD/CAM. The majority of cases were performed for head and neck tumor reconstruction or related complications whereas the remainder (4%) were performed for penetrating trauma. Defect location was the mandible (85%), maxilla (12.5%), and pelvis (2%). Reconstruction was performed immediately in 73% of the cases and delayed in 27% of the cases. The mean number of osseous flap bone segments used in reconstruction was 2.41. Areas of optimization include the following: mandible cutting guide placement, osteotomy creation, alternative planning, and saw blade optimization. Identified benefits of CAD/CAM over current techniques include the following: delayed timing, anterior mandible defects, specimen distortion, osteotomy creation in three dimensions, osteotomy junction overlap, plate adaptation, and maxillary reconstruction. Experience with CAD/CAM for osseous reconstruction has identified tools for technique optimization and cases where this technology may prove beneficial over existing methods. Knowledge of these facts may contribute to improved use and mainstream adoption of CAD/CAM virtual surgical planning by reconstructive surgeons.

  7. Biophysics of the Eye in Computer Vision: Methods and Advanced Technologies

    Science.gov (United States)

    Hammoud, Riad I.; Hansen, Dan Witzner

    The eyes have it! This chapter describes cutting-edge computer vision methods employed in advanced vision sensing technologies for medical, safety, and security applications, where the human eye represents the object of interest for both the imager and the computer. A camera receives light from the real eye to form a sequence of digital images of it. As the eye scans the environment, or focuses on particular objects in the scene, the computer simultaneously localizes the eye position, tracks its movement over time, and infers measures such as the attention level and the gaze direction, in real time and fully automatically. The main focus of this chapter is on computer vision and pattern recognition algorithms for eye appearance variability modeling, automatic eye detection, and robust eye position tracking. This chapter offers good readings and solid methodologies to build the two fundamental low-level building blocks of a vision-based eye tracking technology.

  8. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows the detailed control of the balance between accuracy and computing time. A standard pick-and-freeze estimator of first-order indices is sketched after this record. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
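
    A standard pick-and-freeze Monte Carlo estimator of first-order Sobol' indices, shown on the Ishigami test function; it illustrates the sampling/re-sampling cost discussed above, not the paper's improved estimator.

```python
import numpy as np

def first_order_sobol(model, p, n=200000, seed=0):
    """First-order indices via S_j = E[y_B * (y_ABj - y_A)] / V(y)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, p))   # two independent input blocks
    B = rng.uniform(-np.pi, np.pi, (n, p))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(p)
    for j in range(p):
        ABj = A.copy()
        ABj[:, j] = B[:, j]                  # re-sample only factor j
        S[j] = np.mean(yB * (model(ABj) - yA)) / var
    return S

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a*np.sin(X[:, 1])**2
            + b*X[:, 2]**4*np.sin(X[:, 0]))

print(first_order_sobol(ishigami, p=3))      # ~ [0.31, 0.44, 0.00]
```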

  9. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA), along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  10. Towards using direct methods in seismic tomography: computation of the full resolution matrix using high-performance computing and sparse QR factorization

    Science.gov (United States)

    Bogiatzis, Petros; Ishii, Miaki; Davis, Timothy A.

    2016-05-01

    For more than two decades, the number of data and model parameters in seismic tomography problems has exceeded what the available computational resources can handle with direct methods, leaving iterative solvers as the only option. One disadvantage of the iterative techniques is that the inverse of the matrix that defines the system is not explicitly formed, and as a consequence the model resolution and covariance matrices cannot be computed. Despite significant effort toward computationally affordable approximations of these matrices, challenges remain, and methods such as checkerboard resolution tests continue to be used. Based upon recent developments in sparse algorithms and high-performance computing resources, we show that direct methods are becoming feasible for large seismic tomography problems. We demonstrate the application of QR factorization in solving for the regional P-wave structure and computing the full resolution matrix with 267 520 model parameters.
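    A dense toy illustration of the underlying idea: QR-factorize the (damped) forward operator and form the resolution matrix explicitly. Real tomographic systems are sparse and would use a sparse QR such as SuiteSparseQR, as the paper does; the operator, sizes and damping below are synthetic assumptions:

      # Resolution matrix of a damped least-squares inversion via QR.
      # G, the problem sizes and the damping lam are placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      n_data, n_model = 200, 50
      G = rng.normal(size=(n_data, n_model))     # toy forward operator
      lam = 5.0                                  # damping parameter

      A = np.vstack([G, lam * np.eye(n_model)])  # damped system [G; lam*I]
      Q, R = np.linalg.qr(A)                     # thin QR, A = Q R
      Q1 = Q[:n_data, :]                         # block acting on the data

      # Generalized inverse G# = R^{-1} Q1^T, resolution matrix Res = G# G
      Gsharp = np.linalg.solve(R, Q1.T)
      Res = Gsharp @ G
      print(np.diag(Res)[:5])                    # < 1 because of damping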

  11. A mixed-methods framework for analyzing text data: Integrating computational techniques with qualitative methods in demography

    Directory of Open Access Journals (Sweden)

    Parijat Chakrabarti

    2017-11-01

    Background: Automated text analysis is widely used across the social sciences, yet the application of these methods has largely proceeded independently of qualitative analysis. Objective: This paper explores the advantages of applying automated text analysis to augment traditional qualitative methods in demography. Computational text analysis does not replace close reading or subjective theorizing, but it can provide a complementary set of tools that we believe will be appealing for qualitative demographers. Methods: We apply topic modeling to text data from the Malawi Journals Project as a case study. Results: We examine three common issues that demographers face in analyzing qualitative data: large samples, the challenge of comparing qualitative data across external categories, and making data analysis transparent and readily accessible to other scholars. We discuss ways that new tools from machine learning and computer science might help qualitative scholars to address these issues. Conclusions: We believe that there is great promise in mixed-method approaches to analyzing text. New methods that allow better access to data and new ways to approach qualitative data are likely to be fertile ground for research. Contribution: No research, to our knowledge, has used automated text analysis to take an explicitly mixed-method approach to the analysis of textual data. We develop a framework that allows qualitative researchers to do so.
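    A minimal sketch of the topic-modelling step with scikit-learn's LDA; the three toy "journals" below are invented stand-ins for the Malawi Journals texts, which are not reproduced here:

      # Topic modelling on placeholder documents with scikit-learn LDA.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = [
          "funeral church prayer village neighbour",
          "clinic testing medicine hospital nurse",
          "marriage partner trust village gossip",
      ]
      vec = CountVectorizer()
      X = vec.fit_transform(docs)                   # document-term matrix

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      words = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = topic.argsort()[-4:][::-1]          # top words per topic
          print(f"topic {k}:", [words[i] for i in top])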

  12. Do children with special needs get help at the right time? / Ene Mägi, Urve Raudsepp-Alt, Ale Sprenk, Peeter Aas

    Index Scriptorium Estoniae

    2009-01-01

    The question is answered by: Ene Mägi, head of the department of special and social pedagogy at Tallinn University's Institute of Educational Sciences; Urve Raudsepp-Alt, chief specialist of the general education department of the Tallinn Education Department; Ale Sprenk, director of Krabi basic school; and Peeter Aas, head of the education, culture and social affairs department of the Põlva County Government

  13. What needs to be done to develop school boarding facilities? / Ale Sprenk, Karin Saare, Merike Mändla, Mailis Reps...[et al.

    Index Scriptorium Estoniae

    2008-01-01

    The question is answered by Ale Sprenk, director of Krabi basic school and initiator of the boarding-facility idea; Karin Saare, director of Kasari basic school; Merike Mändla, chief expert of the general education department of the Ministry of Education and Research; Mailis Reps, member of the Riigikogu and former Minister of Education; and Aivo Meema, director of Otepää Gymnasium

  14. Optimization studies of HgSe thin film deposition by electrochemical atomic layer epitaxy (EC-ALE)

    CSIR Research Space (South Africa)

    Venkatasamy, V

    2006-06-01

    Studies of the optimization of HgSe thin film deposition using electrochemical atomic layer epitaxy (EC-ALE) are reported. Cyclic voltammetry was used to obtain approximate deposition potentials for each element. These potentials were then coupled...

  15. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  16. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources involved contain different kinds of operating systems and user data that are stored on a remote server, how to manage the network resources is essential. In this paper, we apply quick emulator (QEMU) virtualization and mobile agent technologies for mobile transparent computing (MTC) to devise a method of shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable.

  17. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper

    2011-01-01

    This work develops novel error expansions with computable leading-order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for the numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm, or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that compares the work of the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading-order term.
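    A minimal tau-leap sketch for a pure jump process (a birth–death model; the rates, step size and horizon are illustrative — the paper's contribution is the a posteriori control of the weak error of exactly this kind of discretization):

      # Tau-leap simulation of a birth-death jump process:
      # birth at rate k1, death at rate k2 * x.
      import numpy as np

      def tau_leap(x0, k1, k2, tau, t_end, rng=np.random.default_rng(0)):
          x, t = x0, 0.0
          while t < t_end:
              # Poisson numbers of firings of each channel over [t, t+tau]
              n_birth = rng.poisson(k1 * tau)
              n_death = rng.poisson(k2 * x * tau)
              x = max(x + n_birth - n_death, 0)   # crude negativity guard
              t += tau
          return x

      print(tau_leap(x0=10, k1=5.0, k2=0.3, tau=0.05, t_end=10.0))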

  19. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters in this book present and discuss new advances and challenges in these fields, covering a range of methods and techniques for signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  20. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields, including meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving-node adaptive-grid method which tends to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive-grid methods have been developed that share some of these desirable properties, this is the only method that combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware.

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  2. Work in process level definition: a method based on computer simulation and Electre TRI

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-09-01

    This paper proposes a method for defining the levels of work in process (WIP) in production environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of Computer Simulation and Electre TRI to support estimation of an adequate WIP level, and is presented in eighteen steps. The paper also presents an application example, performed at a metalworking company. The research method is based on computer simulation, supported by quantitative data analysis. The main contribution of the paper is providing a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.

  3. PSD computations using Welch's method. [Power Spectral Density (PSD)

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency-domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of Welch's PSD is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
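    The segment-averaging procedure the report describes is available off the shelf; a sketch for the report's running example of a sine wave in white noise (the sample rate, segment length and window are illustrative choices, not the report's):

      # Welch PSD estimate of a 50 Hz sine in white noise.
      import numpy as np
      from scipy.signal import welch

      fs = 1000.0                                  # sample rate, Hz
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(0)
      x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)

      # Averaging over Hann-windowed segments trades frequency resolution
      # (set by nperseg) for variance reduction, as the report discusses.
      f, Pxx = welch(x, fs=fs, window="hann", nperseg=1024)
      print(f[np.argmax(Pxx)])                     # peak near 50 Hz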

  4. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at the exact coordinates of one depth layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
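    A sketch of the core idea, not the authors' exact pipeline: bin the point cloud into a few depth layers, then propagate each layer to the hologram plane with one FFT-based (angular spectrum) diffraction calculation and accumulate. The grid size, pixel pitch, wavelength and depths below are assumed placeholders:

      # Layer-based CGH: one FFT diffraction calculation per depth layer
      # instead of one per point. All optical parameters are placeholders.
      import numpy as np

      N, pitch, wl = 512, 8e-6, 532e-9      # grid, pixel pitch, wavelength
      fx = np.fft.fftfreq(N, d=pitch)
      FX, FY = np.meshgrid(fx, fx)

      def angular_spectrum(u, z):
          """Propagate field u by distance z (band-limiting omitted)."""
          arg = np.maximum(1 - (wl * FX) ** 2 - (wl * FY) ** 2, 0.0)
          H = np.exp(1j * 2 * np.pi / wl * z * np.sqrt(arg))
          return np.fft.ifft2(np.fft.fft2(u) * H)

      rng = np.random.default_rng(0)
      depths = [0.10, 0.12, 0.15]           # the depth "grids" of the method
      pts = [(rng.integers(N), rng.integers(N), rng.choice(depths))
             for _ in range(200)]           # synthetic point cloud

      hologram = np.zeros((N, N), complex)
      for z in depths:
          layer = np.zeros((N, N), complex)
          for ix, iy, pz in pts:
              if pz == z:
                  layer[iy, ix] = 1.0       # unit point-source amplitude
          hologram += angular_spectrum(layer, z)
      print(np.abs(hologram).max())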

  5. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method, driven by the needs of the result, to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission cost is cut down. We ran experiments on several matrices of different sizes and degrees of sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
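    For reference, the baseline computation that such blocking schemes accelerate — a sparse-matrix PageRank power iteration on a toy four-page graph (the paper's minimum-blocking storage scheme itself is not reproduced here):

      # Sparse power-iteration PageRank on a toy graph with no dangling
      # nodes; d is the damping factor.
      import numpy as np
      from scipy.sparse import csr_matrix

      links = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]   # i -> j edges
      n = 4
      rows, cols = zip(*links)
      out_deg = np.bincount(rows, minlength=n)
      vals = [1.0 / out_deg[i] for i in rows]
      P = csr_matrix((vals, (cols, rows)), shape=(n, n)) # column-stochastic

      d, r = 0.85, np.full(n, 1.0 / n)
      for _ in range(100):
          r_next = d * (P @ r) + (1 - d) / n
          converged = np.abs(r_next - r).sum() < 1e-12
          r = r_next
          if converged:
              break
      print(r)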

  6. Realization of Heuristic Combination Methods by Means of Computer Graphics

    Directory of Open Access Journals (Sweden)

    S. A. Novoselov

    2012-01-01

    The paper looks at ways of enhancing and stimulating the creative activity and initiative of pedagogy students – prospective specialists called upon to educate and bring up socially and professionally competent, original-thinking, versatile personalities. For developing their creative abilities the author recommends introducing the heuristic combination methods applied to facilitate engineering creativity; associative-synectic technology; and computer graphics tools. The paper contains a comparative analysis of the main heuristic method operations and of the computer graphics editor used in creating a visual composition. Examples of implementing the heuristic combination methods are described, along with extracts from the laboratory classes designed to develop creativity and its motivation. The approbation of the method at several universities confirms its promise for enhancing students' learning and creative activities.

  7. Analysis and evaluation of the reinforced concrete structure of Block 62 by non-destructive methods, destructive methods and the Esteem computer program

    International Nuclear Information System (INIS)

    Mohd Jamil Hashim; Norhazwani Mohd Azahari

    2012-01-01

    The evaluation of an old and undocumented building is a difficult task. This is because there are no detailed records of the building's components – such as reinforced concrete strength test records, the type of reinforcement used, the construction methods, or the soil investigation (SI) – which makes analysis impossible. Through NDT, the building's reinforced concrete components are easily evaluated, while the DT method gives assurance through actual sample testing. From these initial results, detailed drawing plans can be rebuilt and building forensic work can be carried out. These data are then fed into the computer program to produce a structural evaluation result indicating whether or not the building is safe in accordance with the design standard BS 8110. (author)

  8. A substructure method to compute the 3D fluid-structure interaction during blowdown

    International Nuclear Information System (INIS)

    Guilbaud, D.; Axisa, F.; Gantenbein, F.; Gibert, R.J.

    1983-08-01

    The waves generated by a sudden rupture of a PWR primary pipe have an important mechanical effect on the internal structures of the vessel. This fluid–structure interaction has a strong 3D aspect. 3D finite element explicit methods can be applied. These methods take into account the nonlinearities of the problem, but the calculation is heavy and expensive. We describe in this paper another type of method based on a substructure procedure: the vessel, internals, and contained fluid are described axisymmetrically (AQUAMODE computer code), while the pipes and contained fluid are described one-dimensionally (TEDEL-FLUIDE computer code). These substructures are characterized by their natural modes. They are then connected to one another (connecting both structural and fluid nodes) by the TRISTANA computer code. This method allows the 3D fluid–structure effects to be computed correctly and cheaply. The treatment of certain nonlinearities is difficult because of the modal characterization of the substructures; however, variations of the contact conditions versus time can be introduced. We present here some validation tests and comparisons with experimental results from the literature.

  9. Computational analysis of thermal transfer and related phenomena based on the Fourier method

    Science.gov (United States)

    Vala, Jiří; Jarošová, Petra

    2017-07-01

    Modelling and simulation of thermal processes, based on the principles of classical thermodynamics, requires the numerical analysis of evolution partial differential equations of parabolic type. This paper demonstrates how the generalized Fourier method can be applied to the development of robust and effective computational algorithms, with direct application to the design and performance of buildings with controlled energy consumption.
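    A minimal sketch of the classical Fourier method on the model problem behind such algorithms — the 1D heat equation with zero boundary temperatures: project the initial profile onto sine modes and let each mode decay analytically (all parameters are illustrative):

      # Fourier-method solution of u_t = alpha * u_xx on (0, L), u(0)=u(L)=0.
      import numpy as np

      L, alpha, n_modes = 1.0, 1e-2, 50
      x = np.linspace(0, L, 201)
      u0 = x * (L - x)                         # initial temperature profile

      def fourier_heat(t):
          u = np.zeros_like(x)
          for n in range(1, n_modes + 1):
              phi = np.sin(n * np.pi * x / L)
              b_n = 2.0 / L * np.trapz(u0 * phi, x)   # sine coefficient
              u += b_n * np.exp(-alpha * (n * np.pi / L) ** 2 * t) * phi
          return u

      print(fourier_heat(t=5.0).max())         # peak temperature decays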

  10. A method for computing the inter-residue interaction potentials for ...

    Indian Academy of Sciences (India)

    PRAKASH KUMAR

    2007-06-16

    [Luthra A, Jha A N, Ananthasuresh G K and Vishveswara S 2007 A method for computing the inter-residue interaction potentials for reduced amino acid alphabet; J. ... Therefore, a systematic approach to this problem is warranted so ... chemical and biological or quantitative measures are used. Dayhoff et al ...

  11. Computer Aided Methods & Tools for Separation & Purification of Fine Chemical & Pharmaceutical Products

    DEFF Research Database (Denmark)

    Afonso, Maria B.C.; Soni, Vipasha; Mitkowski, Piotr Tomasz

    2006-01-01

    An integrated approach that is particularly suitable for solving problems related to product-process design from the fine chemicals, agrochemicals, food and pharmaceutical industries is presented together with the corresponding methods and tools, which forms the basis for an integrated computer...

  12. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
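    The paper's route is analytic (mixture distribution theory); a common empirical counterpart, shown here as a hedged sketch, is to fit the Gumbel location and scale directly to a sample of alignment scores — the scores below are synthetic stand-ins, not data from the paper:

      # Fit Gumbel (extreme value) parameters to alignment-score samples
      # and compute the tail p-value of an observed score.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      scores = rng.gumbel(loc=35.0, scale=4.0, size=5000)  # stand-in data

      mu, beta = stats.gumbel_r.fit(scores)     # maximum-likelihood fit
      print(mu, beta)

      s_obs = 50.0                              # an observed alignment score
      print(stats.gumbel_r.sf(s_obs, loc=mu, scale=beta))  # significance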

  13. Training primary school teachers in a computer course with visualization-based learning methods

    Directory of Open Access Journals (Sweden)

    Елена Сергеевна Пучкова

    2011-03-01

    Full Text Available The paper considers the possibility of training future teachers with the rate of computer methods of teaching through the creation of visual imagery and operate them, еxamples of practice-oriented assignments, formative professional quality based on explicit and implicit use of a visual image, which decision is based on the cognitive function of visibility.

  15. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of the page. Google computes the PageRank using the power iteration method, which requires
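    The record is truncated, but the Monte Carlo alternative the paper analyses can be sketched: simulate the random surfer directly and count where walks terminate (one simple end-point variant of such estimators; the graph below is a toy stand-in):

      # Monte Carlo "random surfer" PageRank: walks continue with
      # probability c and the terminal pages are counted.
      import numpy as np

      out = {0: [1], 1: [2], 2: [0, 3], 3: [0]}   # toy outlink lists
      n, c, runs = 4, 0.85, 20000
      rng = np.random.default_rng(0)

      counts = np.zeros(n)
      for _ in range(runs):
          page = rng.integers(n)                  # uniform starting page
          while rng.random() < c:
              page = rng.choice(out[page])        # follow a random outlink
          counts[page] += 1
      print(counts / counts.sum())                # PageRank estimate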

  16. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    ... and is based on a Taylor series expansion using a pure imaginary step. The complex-step method is not subject to the subtraction errors that arise with finite-difference approaches when computing first-order sensitivities, and it therefore allows for much smaller step sizes...
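    A minimal sketch of the complex-step derivative itself, applied to the classic Squire–Trapp test function (the paper's gradient-only optimization machinery is not reproduced):

      # Complex-step first derivative: f'(x) ~ Im(f(x + i*h)) / h.
      # No subtraction occurs, so h can be made extremely small.
      import numpy as np

      def cstep(f, x, h=1e-200):
          return np.imag(f(x + 1j * h)) / h

      f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

      print(cstep(f, 1.5))                          # complex-step result
      print((f(1.5 + 1e-8) - f(1.5)) / 1e-8)        # noisier finite difference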

  17. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    Science.gov (United States)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared with experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with those from an experimental setup in accordance with the AMCA fan performance testing standard.

  18. Type-2 fuzzy set extension of DEMATEL method combined with perceptual computing for decision making

    Science.gov (United States)

    Hosseini, Mitra Bokaei; Tarokh, Mohammad Jafar

    2013-05-01

    Most decision-making methods used to evaluate a system or to identify its weak and strong points are based on fuzzy sets and evaluate the criteria with words that are modelled as fuzzy sets. The ambiguity and vagueness of words, and the fact that different people perceive the same word differently, are not considered in these methods. For this reason, decision-making methods that take the perceptions of decision makers into account are desirable. Perceptual computing is a subjective judgment method built on the premise that words mean different things to different people. It models words with interval type-2 fuzzy sets, which capture the uncertainty of the words. Furthermore, in the real world there are interrelations and dependencies between decision-making criteria, so methods that cannot account for these relations are infeasible in some situations. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method considers the interrelations between decision-making criteria. The current study combines DEMATEL and perceptual computing in order to improve decision-making methods: the fuzzy DEMATEL method is extended to interval type-2 fuzzy sets in order to obtain the weights of dependent criteria based on words. The application of the proposed method is presented for knowledge management evaluation criteria.
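    For orientation, the crisp (type-1) core computation that the paper extends to interval type-2 fuzzy sets — normalize the direct-influence matrix and form the total-relation matrix; the influence scores below are illustrative, not from the paper:

      # Crisp DEMATEL: T = D (I - D)^{-1} from a normalized
      # direct-influence matrix; r+c = prominence, r-c = net cause/effect.
      import numpy as np

      A = np.array([[0, 3, 2, 1],        # pairwise influence, 0-4 scale
                    [1, 0, 3, 2],
                    [2, 1, 0, 3],
                    [1, 2, 1, 0]], float)

      D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalize
      T = D @ np.linalg.inv(np.eye(4) - D)                   # total relation

      r, c = T.sum(axis=1), T.sum(axis=0)
      print("prominence (r+c):", r + c)
      print("relation   (r-c):", r - c)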

  19. Proceeding of 1998-workshop on MHD computations. Study on numerical methods related to plasma confinement

    Energy Technology Data Exchange (ETDEWEB)

    Kako, T.; Watanabe, T. [eds.

    1999-04-01

    This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement', held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria, together with their stability properties, are presented. There are also various talks on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)
