WorldWideScience

Sample records for ale computational methods

  1. An Invariant-Preserving ALE Method for Solids under Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sambasivan, Shiv Kumar [Los Alamos National Laboratory]; Christon, Mark A [Los Alamos National Laboratory]

    2012-07-17

    We are proposing a fundamentally new approach to ALE methods for solids undergoing large deformation due to extreme loading conditions. Our approach is based on a physically-motivated and mathematically rigorous construction of the underlying Lagrangian method, vector/tensor reconstruction, remapping, and interface reconstruction. It is transformational because it deviates dramatically from traditionally accepted ALE methods and provides the following set of unique attributes: (1) a three-dimensional, finite volume, cell-centered ALE framework with advanced hypo-/hyper-elasto-plastic constitutive theories for solids; (2) a new physically and mathematically consistent reconstruction method for vector/tensor fields; (3) an advanced invariant-preserving remapping algorithm for vector/tensor quantities; (4) a moment-of-fluid (MoF) interface reconstruction technique for multi-material problems with solids undergoing large deformations. This work brings together many new concepts that, in combination with emergent cell-centered Lagrangian hydrodynamics methods, will produce a cutting-edge ALE capability and define a new state of the art. Many ideas in this work are new, completely unexplored, and hence high risk. The proposed research and the resulting algorithms will be of immediate use in Eulerian, Lagrangian and ALE codes under the ASC program at the lab. In addition, the research on invariant-preserving reconstruction/remap of tensor quantities is of direct interest to ongoing CASL and climate modeling efforts at LANL. The application space impacted by this work includes Inertial Confinement Fusion (ICF), Z-pinch, munition-target interactions, geological impact dynamics, shock processing of powders and shaped charges. The ALE framework will also provide a suitable test-bed for rapid development and assessment of hypo-/hyper-elasto-plastic constitutive theories. Today, there are no invariant-preserving ALE algorithms for treating solids with large deformations. Therefore

  2. Application of the 3D Iced-Ale method to equilibrium and stability problems of a magnetically confined plasma

    International Nuclear Information System (INIS)

    Barnes, D.C.; Brackbill, J.U.

    1977-01-01

    A numerical study of the equilibrium and stability properties of the Scyllac experiment at Los Alamos is described. The formulation of the numerical method, which is an extension of the ICED-ALE method to magnetohydrodynamic flow in three dimensions, is given. The properties of the method are discussed, including low computational diffusion, local conservation, and implicit formulation in the time variable. Also discussed are the problems encountered in applying boundary conditions and computing equilibria. The results of numerical computations of equilibria indicate that the helical field amplitudes must be doubled from their design values to produce equilibrium in the Scyllac experiment. This is consistent with other theoretical and experimental results

  3. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    International Nuclear Information System (INIS)

    Koniges, A.; Eder, D.; Liu, W.; Barnard, J.; Friedman, A.; Logan, G.; Fisher, A.; Masters, N.; Bertozzi, A.

    2011-01-01

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions; it currently runs at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. One approach uses a diffuse-interface surface tension model based on the advective Cahn-Hilliard equations, which allows droplet breakup in divergent velocity fields without the need for imposed perturbations; other approaches require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion.
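
    For reference, the advective Cahn-Hilliard system mentioned above is commonly written as follows (a generic textbook form, not necessarily the exact variant implemented in ALE-AMR):

        \partial_t c + \mathbf{u} \cdot \nabla c = \nabla \cdot \big( M \, \nabla \mu \big),
        \qquad \mu = f'(c) - \epsilon^2 \, \nabla^2 c,

    where c is the phase variable, \mathbf{u} the advecting velocity, M a mobility, f(c) a double-well free energy, and \epsilon sets the diffuse-interface width. Surface tension emerges from the free-energy gradients, which is what permits droplet breakup without seeded perturbations.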

  4. A coupling of empirical explosive blast loads to ALE air domains in LS-DYNA®

    International Nuclear Information System (INIS)

    Slavik, Todd P

    2010-01-01

    A coupling method recently implemented in LS-DYNA® allows empirical explosive blast loads to be applied to air domains treated with the multi-material arbitrary Lagrangian-Eulerian (ALE) formulation. Previously, when simulating structures subjected to blast loads, two methods of analysis were available: a purely Lagrangian approach or one involving the ALE and Lagrangian formulations coupled with a fluid-structure interaction (FSI) algorithm. In the former, air blast pressure is computed with empirical equations and directly applied to Lagrangian elements of the structure. In the latter approach, the explosive as well as the air are explicitly modeled and the blast wave propagating through the ALE air domain impinges on the Lagrangian structure through FSI. Since the purely Lagrangian approach avoids modeling the air between the explosive and structure, significant computational cost savings can be realized - especially so when large standoff distances are considered. The shortcoming of the empirical blast equations is their inability to account for focusing or shadowing of the blast waves due to their interaction with structures which may intervene between the explosive and primary structure of interest. The new method presented here obviates modeling the explosive and the air leading up to the structure. Instead, only the air immediately surrounding the Lagrangian structures need be modeled with ALE, while effects of the far-field blast are applied to the outer face of that ALE air domain with the empirical blast equations; thus, focusing and shadowing effects can be accommodated yet computational costs are kept to a minimum. Comparison of the efficiency and accuracy of this new method with other approaches shows that the ability of LS-DYNA® to model a variety of new blast scenarios has been greatly extended.
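
    Empirical air-blast loads of this kind are often represented by the Friedlander waveform; the sketch below is a generic illustration (not LS-DYNA's actual implementation; the peak pressure, positive-phase duration, and decay coefficient would come from range/charge-mass correlations such as Kingery-Bulmash):

        import math

        def friedlander_overpressure(t, p_peak, t_dur, b):
            """Incident overpressure (Pa) at time t (s) after shock arrival.

            p_peak : peak overpressure (Pa)
            t_dur  : positive-phase duration (s)
            b      : dimensionless decay coefficient
            """
            if t < 0.0 or t > t_dur:
                return 0.0  # only the positive phase is modeled here
            return p_peak * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)

        # Example: 100 kPa peak, 5 ms positive phase, decay coefficient 1.5,
        # sampled every 0.1 ms to build a pressure history for a boundary face.
        history = [friedlander_overpressure(i * 1.0e-4, 1.0e5, 5.0e-3, 1.5)
                   for i in range(60)]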

  5. ALE finite volume method for free-surface Bingham plastic fluids with general curvilinear coordinates

    International Nuclear Information System (INIS)

    Nagai, Katsuaki; Ushijima, Satoru

    2010-01-01

    A numerical prediction method has been proposed to predict Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain-rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with the Cauchy momentum equations rather than the Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are updated in each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables us to reasonably predict Bingham plastic fluids with a free surface in a container.
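
    For reference, the Bingham constitutive relation that replaces the linear Newtonian law can be written as (standard form; the paper's regularization near the yield point is not specified in the abstract):

        \dot{\boldsymbol{\gamma}} = 0 \quad \text{for} \quad |\boldsymbol{\tau}| \le \tau_y,
        \qquad
        \boldsymbol{\tau} = \left( \mu + \frac{\tau_y}{|\dot{\boldsymbol{\gamma}}|} \right) \dot{\boldsymbol{\gamma}}
        \quad \text{for} \quad |\boldsymbol{\tau}| > \tau_y,

    where \tau_y is the yield stress, \mu the plastic viscosity, and \dot{\boldsymbol{\gamma}} the strain-rate tensor. Below the yield stress the material moves rigidly, which is why the Cauchy momentum equations are used instead of the Navier-Stokes equations.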

  6. ALE finite volume method for free-surface Bingham plastic fluids with general curvilinear coordinates

    Science.gov (United States)

    Nagai, Katsuaki; Ushijima, Satoru

    2010-06-01

    A numerical prediction method has been proposed to predict Bingham plastic fluids with a free surface in a two-dimensional container. Since linear relationships between stress tensors and strain-rate tensors cannot be assumed for non-Newtonian fluids, the liquid motions are described with the Cauchy momentum equations rather than the Navier-Stokes equations. The profile of the liquid surface is represented with two-dimensional curvilinear coordinates, which are updated in each computational step on the basis of the arbitrary Lagrangian-Eulerian (ALE) method. Since the volumes of the fluid cells change transiently in physical space, the geometric conservation law is applied to the finite volume discretizations. As a result, it has been shown that the present method enables us to reasonably predict Bingham plastic fluids with a free surface in a container.

  7. A Cell-Centered Multiphase ALE Scheme With Structural Coupling

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Timothy Alan [Univ. of California, Davis, CA (United States)]

    2012-04-16

    A novel computational scheme has been developed for simulating compressible multiphase flows interacting with solid structures. The multiphase fluid is computed using a Godunov-type finite-volume method. This has been extended to allow computations on moving meshes using a direct arbitrary Lagrangian-Eulerian (ALE) scheme. The method has been implemented within a Lagrangian hydrocode, which allows modeling the interaction with Lagrangian structural regions. Although the above scheme is general enough for use on many applications, the ultimate goal of the research is the simulation of heterogeneous energetic material, such as explosives or propellants. The method is powerful enough for application to all stages of the problem, including the initial burning of the material, the propagation of blast waves, and interaction with surrounding structures. The method has been tested on a number of canonical multiphase tests as well as fluid-structure interaction problems.
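
    In semi-discrete form, a cell-centered finite-volume ALE update of the kind described can be written as (a generic form, not this report's exact scheme):

        \frac{d}{dt} \big( V_i \bar{\mathbf{U}}_i \big)
          + \sum_{f \in \partial \Omega_i} \Big[ \mathbf{F}(\mathbf{U}_f)
          - \mathbf{U}_f \, (\mathbf{w} \cdot \mathbf{n})_f \Big] A_f = 0,

    where \bar{\mathbf{U}}_i is the cell average, V_i the (moving) cell volume, and \mathbf{w} the mesh velocity: \mathbf{w} = 0 recovers the Eulerian scheme, \mathbf{w} equal to the fluid velocity the Lagrangian one, and a direct ALE scheme moves the mesh arbitrarily without a separate remap step.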

  8. An AMR capable finite element diffusion solver for ALE hydrocodes

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Bailey, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Kaiser, T. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Eder, D. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Gunney, B. T. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Masters, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Koniges, A. E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Anderson, R. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2015-02-01

    Here, we present a novel method for the solution of the diffusion equation on a composite AMR mesh. This approach is suitable for adding diffusion-based physics modules to hydrocodes that support ALE and AMR capabilities. To illustrate, we proffer our implementations of diffusion-based radiation transport and heat conduction in a hydrocode called ALE-AMR. Numerical experiments conducted with the diffusion solver and associated physics packages yield 2nd order convergence in the L2 norm.
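
    The quoted 2nd-order convergence can be checked with a standard grid-refinement computation; a minimal sketch (assuming L2 errors from two mesh resolutions are available):

        import math

        def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
            """Observed convergence order from errors on two nested meshes."""
            return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

        # Example: halving h reduces the L2 error from 4.0e-3 to 1.0e-3,
        # giving an observed order of 2.0.
        print(observed_order(4.0e-3, 1.0e-3))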

  9. Lagrangian and ALE Formulations For Soil Structure Coupling with Explosive Detonation

    Directory of Open Access Journals (Sweden)

    M Souli

    2017-03-01

    Simulation of soil-structure interaction is increasingly the focus of computational engineering in civil and mechanical engineering, where FEM (finite element methods) for structural and soil mechanics and finite volume methods for CFD are dominant. New formulations have been developed for FSI applications using ALE (Arbitrary Lagrangian Eulerian) and mesh-free methods such as SPH (Smoothed Particle Hydrodynamics). In the defence industry, engineers have for many years been developing protection systems to reduce the vulnerability of light armoured vehicles (LAV) against mine blast using classical Lagrangian FEM methods. To improve simulations and assist in the development of these protections, experimental tests and new numerical techniques are performed. To carry out these numerical calculations, initial conditions such as the loading prescribed by a mine on a structure need to be simulated adequately. The effects of blast on structures often depend on how these initial conditions are estimated and applied. In this paper, two methods were used to simulate a mine blast: the classical Lagrangian and the ALE formulations. The comparative study was done for a simple and a more complex target. Particle methods such as SPH can also be used for soil-structure interaction.

  10. Modeling warm dense matter experiments using the 3D ALE-AMR code and the move toward exascale computing

    International Nuclear Information System (INIS)

    Koniges, A.; Liu, W.; Barnard, J.; Friedman, A.; Logan, G.; Eder, D.; Fisher, A.; Masters, N.; Bertozzi, A.

    2013-01-01

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion. (authors)

  11. Coastal Improvements for Tide Models: The Impact of ALES Retracker

    Directory of Open Access Journals (Sweden)

    Gaia Piccioni

    2018-05-01

    Since the launch of the first altimetry satellites, ocean tide models have improved dramatically for deep and shallow waters. However, issues remain in areas of great interest for climate-change investigations: the coastal regions. The purpose of this study is to analyze the influence of the ALES coastal retracker on tide modeling in these regions, compared with a standard open-ocean retracker. The approach used to compute the tidal constituents is an updated, along-track version of the Empirical Ocean Tide model developed at DGFI-TUM. The major constituents are derived from a least-squares harmonic analysis of sea level residuals based on the FES2014 tide model. The results obtained with ALES are compared with those estimated with the standard product. A lower fitting error is found for the ALES solution, especially for distances closer than 20 km from the coast. In comparison with in situ data, the root mean squared error computed with ALES can improve by more than 2 cm at single locations, with an average impact of over 10% for the tidal constituents K2, O1, and P1. For Q1, the improvement is over 25%. Improvements to the root-sum squares are larger for distances closer than 10 km to the coast, independently of the sea state. Finally, the performance of the solutions changes with the satellite's flight direction: for tracks approaching land from the open ocean, root mean square differences larger than 1 cm are found compared with tracks going from land to ocean.
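
    A least-squares harmonic analysis of the kind used here fits sinusoids at known constituent frequencies to the sea level residuals; a minimal sketch (generic formulation with hypothetical inputs, not the DGFI-TUM implementation):

        import numpy as np

        def harmonic_fit(t, h, omegas):
            """Least-squares amplitudes/phases of tidal constituents.

            t      : observation times (s), 1D array
            h      : sea level residuals (m), 1D array
            omegas : angular frequencies of the constituents (rad/s)
            """
            cols = [np.ones_like(t)]  # mean offset
            for w in omegas:
                cols += [np.cos(w * t), np.sin(w * t)]
            A = np.column_stack(cols)
            x, *_ = np.linalg.lstsq(A, h, rcond=None)
            amps = [np.hypot(x[1 + 2 * i], x[2 + 2 * i])
                    for i in range(len(omegas))]
            phases = [np.arctan2(x[2 + 2 * i], x[1 + 2 * i])
                      for i in range(len(omegas))]
            return x[0], amps, phases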

  12. Time-Discrete Higher-Order ALE Formulations: Stability

    KAUST Repository

    Bonito, Andrea

    2013-01-01

    Arbitrary Lagrangian Eulerian (ALE) formulations deal with PDEs on deformable domains upon extending the domain velocity from the boundary into the bulk with the purpose of keeping mesh regularity. This arbitrary extension has no effect on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time-dependent advection-diffusion model problem in moving domains, and study their stability properties. The analysis hinges on the validity of the Reynolds' identity for dG. Exploiting the variational structure and assuming exact integration, we prove that our conservative and nonconservative dG schemes are equivalent and unconditionally stable. The same results remain true for piecewise polynomial ALE maps of any degree and suitable quadrature that guarantees the validity of the Reynolds' identity. This approach generalizes the so-called geometric conservation law to higher-order methods. We also prove that simpler Runge-Kutta-Radau methods of any order are conditionally stable, that is, subject to a mild ALE constraint on the time steps. Numerical experiments corroborate and complement our theoretical results. © 2013 Society for Industrial and Applied Mathematics.
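
    The Reynolds (transport) identity underpinning this analysis is, in its continuous form (standard statement; the paper works with its discrete dG analogue):

        \frac{d}{dt} \int_{\Omega(t)} u \, dx
          = \int_{\Omega(t)} \big( \partial_t u + \nabla \cdot (u \, \mathbf{w}) \big) \, dx,

    where \mathbf{w} is the domain (ALE) velocity. Discrete schemes that satisfy a discrete analogue of this identity inherit the geometric conservation law mentioned in the abstract.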

  13. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  14. Measuring Extinction with ALE

    Science.gov (United States)

    Zimmer, Peter C.; McGraw, J. T.; Gimmestad, G. G.; Roberts, D.; Stewart, J.; Smith, J.; Fitch, J.

    2007-12-01

    ALE (Astronomical LIDAR for Extinction) is deployed at the University of New Mexico's (UNM) Campus Observatory in Albuquerque, NM. It has begun a year-long testing phase prior to deployment at McDonald Observatory in support of the CCD/Transit Instrument II (CTI-II). ALE is designed to produce a high-precision measurement of atmospheric absorption and scattering above the observatory site every ten minutes of every moderately clear night. LIDAR (LIght Detection And Ranging) is the VIS/UV/IR analog of radar, using a laser, telescope and time-gated photodetector instead of a radio transmitter, dish and receiver. In the case of ALE -- an elastic backscatter LIDAR -- 20 ns long, eye-safe laser pulses are launched 2500 times per second from a 0.32 m transmitting telescope co-mounted with a 50 mm short-range receiver on an alt-az mounted 0.67 m long-range receiver. Photons from the laser pulse are scattered and absorbed as the pulse propagates through the atmosphere, a portion of which are scattered into the field of view of the short- and long-range receiver telescopes and detected by a photomultiplier. The properties of a given volume of atmosphere along the LIDAR path are inferred from both the altitude-resolved backscatter signal as well as the attenuation of the backscatter signal from altitudes above it. We present ALE profiles from the commissioning phase and demonstrate some of the astronomically interesting atmospheric information that can be gleaned from these data, including, but not limited to, total line-of-sight extinction. This project is funded by NSF Grant 0421087.
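
    For context, the altitude-resolved backscatter described here obeys the single-scattering elastic LIDAR equation (standard form; the symbols are generic, not ALE's calibration constants):

        P(r) = P_0 \, \frac{K \, G(r)}{r^2} \, \beta(r)
               \exp\!\left( -2 \int_0^r \alpha(r') \, dr' \right),

    where P(r) is the power received from range r, K a system constant, G(r) the geometric overlap between transmitter and receiver, \beta the backscatter coefficient, and \alpha the extinction coefficient. The two-way transmission term is what lets line-of-sight extinction be inferred from the return signal.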

  15. Deposition of HgTe by electrochemical atomic layer epitaxy (EC-ALE)

    CSIR Research Space (South Africa)

    Venkatasamy, V

    2006-04-01

    Full Text Available This paper describes the first instance of HgTe growth by electrochemical atomic layer epitaxy (EC-ALE). EC-ALE is the electrochemical analog of atomic layer epitaxy (ALE) and atomic layer deposition (ALD), all of which are based on the growth...

  16. An asymptotic preserving multidimensional ALE method for a system of two compressible flows coupled with friction

    Science.gov (United States)

    Del Pino, S.; Labourasse, E.; Morel, G.

    2018-06-01

    We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.

  17. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    Energy Technology Data Exchange (ETDEWEB)

    Noble, Charles R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Anderson, Andrew T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Barton, Nathan R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Bramwell, Jamie A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Capps, Arlie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Chang, Michael H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Chou, Jin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Dawson, David M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Diana, Emily R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Dunn, Timothy A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Faux, Douglas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Fisher, Aaron C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Heinz, Ines [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Kanarska, Yuliya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Khairallah, Saad A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Liu, Benjamin T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Margraf, Jon D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Nichols, Albert L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Nourgaliev, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Puso, Michael A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Reus, James F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Robinson, Peter B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Shestakov, Alek I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Solberg, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Taller, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Tsuji, Paul H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; White, Christopher A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; White, Jeremy L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-05-23

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  18. Parallel Dynamic Analysis of a Large-Scale Water Conveyance Tunnel under Seismic Excitation Using ALE Finite-Element Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wang

    2016-01-01

    Parallel analyses of the dynamic responses of a large-scale water conveyance tunnel under seismic excitation are presented in this paper. A full three-dimensional numerical model considering water-tunnel-soil coupling is established and adopted to investigate the tunnel's dynamic responses. The movement and sloshing of the internal water are simulated using the multi-material Arbitrary Lagrangian-Eulerian (ALE) method. Nonlinear fluid-structure interaction (FSI) between the tunnel and the inner water is treated with the penalty method. Nonlinear soil-structure interaction (SSI) between the soil and the tunnel is handled with a surface-to-surface contact algorithm. To overcome computing-power limitations and deal with such a large-scale calculation, a parallel algorithm based on modified recursive coordinate bisection (MRCB), which accounts for the balance of SSI and FSI loads, is proposed and used. The whole simulation is accomplished on Dawning 5000A using the proposed MRCB-based parallel algorithm, optimized to run on supercomputers. The simulation model and the proposed approaches are validated by comparison with the added mass method. Dynamic responses of the tunnel are analyzed and the parallelism is discussed. In addition, factors affecting the dynamic responses are investigated. Good speedup and parallel efficiency show the scalability of the parallel method, and the analysis results can be used to aid the design of water conveyance tunnels.
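
    The abstract does not give the MRCB algorithm itself; for orientation, plain weighted recursive coordinate bisection, which MRCB modifies to balance the SSI/FSI load terms, can be sketched as follows (assumes a power-of-two part count):

        import numpy as np

        def rcb(points, weights, n_parts):
            """Weighted recursive coordinate bisection into n_parts subdomains.

            points  : (n, 3) element coordinates
            weights : (n,) per-element computational loads
            Returns a list of index arrays, one per subdomain.
            """
            if n_parts == 1:
                return [np.arange(len(points))]
            # Split along the longest axis of the bounding box so that
            # each half carries roughly half of the total weight.
            axis = np.argmax(points.max(axis=0) - points.min(axis=0))
            order = np.argsort(points[:, axis])
            csum = np.cumsum(weights[order])
            cut = np.searchsorted(csum, 0.5 * csum[-1])
            parts = []
            for half in (order[:cut + 1], order[cut + 1:]):
                for sub in rcb(points[half], weights[half], n_parts // 2):
                    parts.append(half[sub])
            return parts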

  19. The LOCAL attack: Cryptanalysis of the authenticated encryption scheme ALE

    DEFF Research Database (Denmark)

    Khovratovich, Dmitry; Rechberger, Christian

    2014-01-01

    We show how to produce a forged (ciphertext, tag) pair for the scheme ALE with data and time complexity of 2^102 ALE encryptions of short messages and the same number of authentication attempts. We use a differential attack based on a local collision, which exploits the availability of extracted

  20. On the potential of computational methods and numerical simulation in ice mechanics

    International Nuclear Information System (INIS)

    Bergan, Paal G; Cammaert, Gus; Skeie, Geir; Tharigopula, Venkatapathi

    2010-01-01

    This paper deals with the challenge of developing better methods and tools for analysing interaction between sea ice and structures and, in particular, to be able to calculate ice loads on these structures. Ice loads have traditionally been estimated using empirical data and 'engineering judgment'. However, it is believed that computational mechanics and advanced computer simulations of ice-structure interaction can play an important role in developing safer and more efficient structures, especially for irregular structural configurations. The paper explains the complexity of ice as a material in computational mechanics terms. Some key words here are large displacements and deformations, multi-body contact mechanics, instabilities, multi-phase materials, inelasticity, time dependency and creep, thermal effects, fracture and crushing, and multi-scale effects. The paper points towards the use of advanced methods like ALE formulations, mesh-less methods, particle methods, XFEM, and multi-domain formulations in order to deal with these challenges. Some examples involving numerical simulation of interaction and loads between level sea ice and offshore structures are presented. It is concluded that computational mechanics may prove to be a very useful tool for analysing structures in ice; however, much research is still needed to achieve satisfactory reliability and versatility of these methods.

  1. Compatible, energy conserving, bounds preserving remap of hydrodynamic fields for an extended ALE scheme

    Science.gov (United States)

    Burton, D. E.; Morgan, N. R.; Charest, M. R. J.; Kenamond, M. A.; Fung, J.

    2018-02-01

    From the very origins of numerical hydrodynamics in the Lagrangian work of von Neumann and Richtmyer [83], the issue of total energy conservation as well as entropy production has been problematic. Because of well known problems with mesh deformation, Lagrangian schemes have evolved into Arbitrary Lagrangian-Eulerian (ALE) methods [39] that combine the best properties of Lagrangian and Eulerian methods. Energy issues have persisted for this class of methods. We believe that fundamental issues of energy conservation and entropy production in ALE require further examination. The context of the paper is an ALE scheme that is extended in the sense that it permits cyclic or periodic remap of data between grids of the same or differing connectivity. The principal design goals for a remap method then consist of total energy conservation, bounded internal energy, and compatibility of kinetic energy and momentum. We also have secondary objectives of limiting velocity and stress in a non-directional manner, keeping primitive variables monotone, and providing a higher than second order reconstruction of remapped variables. In particular, the new contributions fall into three categories associated with: energy conservation and entropy production, reconstruction and bounds preservation of scalar and tensor fields, and conservative remap of nonlinear fields. The paper presents a derivation of the methods, details of implementation, and numerical results for a number of test problems. The method requires volume integration of polynomial functions in polytopal cells with planar facets, and the requisite expressions are derived for arbitrary order.
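
    The volume-integration task mentioned at the end admits a compact reduction for a homogeneous polynomial q of degree d on a 3D cell with planar facets (a generic identity following from \nabla \cdot (q\,\mathbf{x}) = (d+3)\,q; the paper derives its own expressions):

        \int_V q \, dV = \frac{1}{d+3} \sum_f (\mathbf{x}_f \cdot \mathbf{n}_f) \int_f q \, dA,

    where \mathbf{x}_f is any point on facet f (the product \mathbf{x} \cdot \mathbf{n} is constant on a planar facet). Applying the identity recursively reduces volume integrals to facet and then edge integrals.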

  2. Free surface modeling of contacting solid metal flows employing the ALE formulation

    NARCIS (Netherlands)

    van der Stelt, A.A.; Bor, Teunis Cornelis; Geijselaers, Hubertus J.M.; Akkerman, Remko; Huetink, Han; Merklein, M.; Hagenah, H.

    2012-01-01

    In this paper, a numerical problem with contacting solid metal flows is presented and solved with an arbitrary Lagrangian-Eulerian (ALE) finite element method. The problem consists of two domains which mechanically interact with each other. For this simulation a new free surface boundary condition

  3. Taxonomic characterization and trophic groups of two bird communities associated with semideciduous forests and pine-oak vegetation of the «Marvels of Viñales» and «Ancón Valley» trails in Viñales National Park

    Directory of Open Access Journals (Sweden)

    Miguel Cué Rivero

    2015-06-01

    The present work was carried out from February to April 2009 in the semideciduous forest of «Marvels of Viñales» and the pine-oak formation of «Ancón Valley» in Viñales National Park. Its main objective was to characterize the taxonomic composition and trophic groups of two bird communities associated with the semideciduous forest and pine-oak vegetation of the two trails. The fixed-radius circular plot method was used at 30 count points separated by 150 m from one another. A total of 44 bird species were detected in the semideciduous forest and 42 in the pine-oak forest, belonging to 9 orders and 18 families. Twenty-three trophic groups were registered, with a prevalence of insectivores. The bird communities of the semideciduous forest of the «Marvels of Viñales» trail and the pine-oak forest of «Ancón Valley» differed in taxonomic composition. The communities of the two vegetation formations also differed in trophic composition, although in both a majority of birds consuming insects and grains was observed.

  4. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
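
    For reference, the ALE statistic has a simple closed form: each reported focus is smoothed into a modeled activation (MA) map, and at every voxel v the evidence from k experiments is combined as (the standard GingerALE formulation):

        \mathrm{ALE}(v) = 1 - \prod_{i=1}^{k} \big( 1 - \mathrm{MA}_i(v) \big),

    treating the experiments as independent; significance is then assessed against a null distribution generated from randomly placed foci.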

  5. Methods for simulation-based analysis of fluid-structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
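
    The POD step referenced above starts from a snapshot matrix whose columns are saved flow states; a minimal sketch of extracting the leading modes via the SVD (a generic method-of-snapshots formulation, not the report's code):

        import numpy as np

        def pod_basis(snapshots, n_modes):
            """POD basis from a (n_dof, n_snapshots) snapshot matrix."""
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            captured = np.cumsum(s**2) / np.sum(s**2)
            return mean, U[:, :n_modes], captured[n_modes - 1]

        # A Galerkin ROM then evolves a handful of modal coefficients with
        # projected operators instead of the full CFD state vector.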

  6. Strategic alliance in the marketing channel: the ALE Combustíveis S.A. case

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Garcia Cotta

    2010-01-01

    This paper analyses the strategy adopted by ALE Combustíveis, a Brazilian company, in an operation designed to sell automotive lubricants at its network of gas stations. The study reviews the alliances made with Elf and later with AC Delco, describing ALE's motivations, partner selection, design of the relationship model, alliance management, and assessment of the adopted model, confronting practical experience with the prescriptions of the literature. The adopted model, termed broker, is characterized by preservation of each brand's autonomy, with ALE contributing its structure and sales force, and its partner the replenishment logistics and order processing. The case method was adopted as the research strategy, and the account shows that the choice of model was sound; however, strategic failures in operational capacity and product positioning, in the alliance with Elf, and in the balance of power within the channel, in the alliance with AC Delco, led both attempts to fail.

  7. Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Glenn, Lewis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Schebler, Gregory [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; McMichael, Larry [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2011-09-12

    This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).

  8. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  9. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States)]; Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  10. Supersymmetric 3-branes on smooth ALE manifolds with flux

    International Nuclear Information System (INIS)

    Bertolini, M.; Campos, V.L.; Ferretti, G.; Fre, P.; Salomonson, P.; Trigiante, M.

    2001-01-01

    We construct a new family of classical BPS solutions of type IIB supergravity describing 3-branes transverse to a 6-dimensional space with topology R^2 x ALE. They are characterized by a non-trivial flux of the supergravity 2-forms through the homology 2-cycles of a generic smooth ALE manifold. Our solutions have two Killing spinors and thus preserve N=2 supersymmetry. They are expressed in terms of a quasi harmonic function H (the 'warp factor'), whose properties we study in the case of the simplest ALE, namely the Eguchi-Hanson manifold. The equation for H is identified as an instance of the confluent Heun equation. We write explicit power series solutions and solve the recurrence relation for the coefficients, discussing also the relevant asymptotic expansions. While, as in all such N=2 solutions, supergravity breaks down near the brane, the smoothing out of the vacuum geometry has the effect that the warp factor is regular in a region near the cycle. We interpret the behavior of the warp factor as describing a three-brane charge 'smeared' over the cycle and consider the asymptotic form of the geometry in that region, showing that conformal invariance is broken even when the complex type IIB 3-form field strength is assumed to vanish. We conclude with a discussion of the basic features of the gauge theory dual
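
    For orientation, the warped 3-brane ansatz underlying such solutions is usually written as (a standard form from the type IIB literature; the paper's exact conventions may differ):

        ds^2 = H^{-1/2}(y) \, \eta_{\mu\nu} \, dx^\mu dx^\nu
             + H^{1/2}(y) \, ds^2_{R^2 \times \mathrm{ALE}},

    where H is the warp factor on the transverse space; H is only "quasi" harmonic because its equation carries a source term from the 2-form flux through the ALE 2-cycles in addition to the brane charge.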

  11. Adaptive informatic system «Determination of the eigenstates of fullerene molecules»

    Directory of Open Access Journals (Sweden)

    Victor CIOBU

    2015-12-01

    The article studies the behavior and physical properties of fullerenes. The advantage of intelligent information technologies is to build the computational program automatically from the initial specification of a poorly structured problem and from the refinement information supplied by the problem's owner in a dialogue with the DSS (unknown problem parameters, calculation method, optimization criteria, etc.). The DSS was used to study the fullerenes C60, C70, C76, and C82.

  12. Magneto-Hydrodynamic Simulations of a Magnetic Flux Compression Generator Using ALE3D

    Science.gov (United States)

    2017-07-01

    [Fig. 3 (caption): Half-plane view of the geometry used in the ALE3D simulation, showing the materials; material properties reference LLNL's SESAME data.] Generation of a suitable mesh can be time-consuming. Since MFCGs have a cylindrical geometry, a high-resolution mesh is not required; one can use a conformal mesh and

  13. Phenomenology of religion in the vision of Angela Ales Bello

    Directory of Open Access Journals (Sweden)

    MOBEEN SHAHID

    2015-11-01

    If the sacred is in any way a vehicle of religious experience, there is nothing new about what it is a vehicle of; but if we focus on what the sacred is in itself, we need a method to do so. In this regard I would mention the work of the phenomenology-of-religion school in and from Italy, led in particular over the last four decades by Angela Ales Bello as its maestro. Several works of the author have paved the path, considering various points in scientific articles and texts, but the main conceptual map is developed in the following three: Culture e religioni. Una lettura fenomenologica, Città Nuova, Roma, 1997; a co-authored work with me, Lineamenti di Antropologia Filosofica: Fenomenologia della religione ed esperienza mistica islamica, Editrice Apes, Roma, 2012; and the latest, Il senso del sacro. Dall'arcaicità alla desacralizzazione, Castelvecchi, Roma, 2014.

  14. Dynamic debonding in layered structures: a coupled ALE-cohesive approach

    Directory of Open Access Journals (Sweden)

    Marco Francesco Funari

    2017-07-01

    A computational formulation able to simulate crack initiation and growth in layered structural systems is proposed. In order to identify the position of onset interfacial defects and their dynamic debonding mechanisms, a moving-mesh strategy based on the Arbitrary Lagrangian-Eulerian (ALE) approach is combined with a cohesive interface methodology, in which weak, moving connections are implemented using a finite element formulation. The numerical formulation is implemented in separate steps, concerned first with identifying the correct position of crack onset and subsequently with the growth, by changing the computational geometry of the interfaces. In order to verify the accuracy and validate the proposed methodology, comparisons with experimental and numerical results are developed. In particular, results obtained by the proposed model in terms of location and speed of the debonding front are compared with those from the literature. Moreover, a parametric study of the geometrical characteristics of the layered structure is developed. The investigation reveals the impact of the stiffening of the reinforced strip and of the adhesive thickness on the dynamic debonding mechanisms.

  15. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  16. Overview of SPH-ALE applications for hydraulic turbines in ANDRITZ Hydro

    Science.gov (United States)

    Rentschler, M.; Marongiu, J. C.; Neuhauser, M.; Parkinson, E.

    2018-02-01

    Over the past 13 years, ANDRITZ Hydro has developed an in-house tool based on the SPH-ALE method for applications in flow simulations in hydraulic turbines. The initial motivation is related to the challenging simulation of free surface flows in Pelton turbines, where highly dynamic water jets interact with rotating buckets, creating thin water jets traveling inside the housing and possibly causing disturbances on the runner. The present paper proposes an overview of industrial applications allowed by the developed tool, including design evaluation of Pelton runners and casings, transient operation of Pelton units and free surface flows in hydraulic structures.
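
    At the core of SPH-ALE is kernel interpolation over neighboring particles; in standard SPH notation:

        f(\mathbf{x}_i) \approx \sum_j \frac{m_j}{\rho_j} \, f_j \,
                               W(\mathbf{x}_i - \mathbf{x}_j, h),

    where m_j and \rho_j are particle masses and densities, W a compactly supported smoothing kernel, and h the smoothing length. In the ALE variant the particles may move with an arbitrary transport velocity rather than the material velocity, with interparticle fluxes typically computed from approximate Riemann solvers.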

  17. Wavenet algorithms with applications in signal approximation: a comparative study

    Directory of Open Access Journals (Sweden)

    C.R. Domínguez Mayorga

    2012-10-01

    In this work, adaptive methods are applied to the design of computational algorithms; these algorithms employ neural networks and wavelet series to build wavenet 'neuro-approximators'. It is shown how wavenets can be combined with self-tuning methods to track complex time-dependent signals. The resulting algorithms are applied to the approximation of signals representing algebraic and random functions, as well as a medical ECG signal. Numerical simulation results are shown for two wavenet neuro-approximator architectures: the first is based on a wavenet that approximates the signals under study while the neural network parameters are adjusted online; the second adds an IIR filter to the wavenet output to discriminate the contributions of the neurons that carry less weight in the signal approximation, which helps reduce the convergence time to a desired minimum error.

  18. Pavel Janoušek's reply to Aleš Haman

    Czech Academy of Sciences Publication Activity Database

    Janoušek, Pavel

    2014-01-01

    Vol. 25, No. 2 (2014), p. 18. ISSN 0862-657X. Institutional support: RVO:68378068. Keywords: Czech literature * socially engaged art * Haman, Aleš * polemics. Subject RIV: AJ - Letters, Mass-media, Audiovision

  19. Compression of epileptic and normal electroencephalographic signals

    Directory of Open Access Journals (Sweden)

    Fernando Cruz Roldán

    2012-02-01

    Electroencephalographic signals are used in the study of several diseases, but they generate data volumes that make processing difficult. Compressing these signals reduces the volume of acquired data, easing their handling. Epilepsy is one of the most common of these diseases, and the signals that record this type of anomaly differ somewhat from ordinary electroencephalographic signals. This work analyzes the compression behavior of signals containing epileptic episodes compared with signals that do not. A set of quality and compression parameters is established for the comparison. Better quality and higher compression are obtained when the signal contains epileptic episodes. This result is associated with the synchronization that occurs in these cases and the corresponding contraction of the frequency bands carrying the most relevant information.

  20. Pseudotumoral retroperitoneal malakoplakia

    African Journals Online (AJOL)

    mn

    …tuberculous origin and the retroperitoneal mass syndrome, suggesting in the first instance a soft-tissue sarcoma or a lymphoma [5]. Microscopically, the malakoplakic granuloma is characterized by the presence of von Hansemann cells [4]. These cells are histiocytes with large cy…

  1. Microbial diversity and metabolite composition of Belgian red-brown acidic ales.

    Science.gov (United States)

    Snauwaert, Isabel; Roels, Sanne P; Van Nieuwerburg, Filip; Van Landschoot, Anita; De Vuyst, Luc; Vandamme, Peter

    2016-03-16

    Belgian red-brown acidic ales are sour and alcoholic fermented beers, which are produced by mixed-culture fermentation and blending. The brews are aged in oak barrels for about two years, after which mature beer is blended with young, non-aged beer to obtain the end-products. The present study evaluated the microbial community diversity of Belgian red-brown acidic ales at the end of the maturation phase of three subsequent brews of three different breweries. The microbial diversity was compared with the metabolite composition of the brews at the end of the maturation phase. Therefore, mature brew samples were subjected to 454 pyrosequencing of the 16S rRNA gene (bacteria) and the internal transcribed spacer region (yeasts), and a broad range of metabolites was quantified. The most important microbial species present in the Belgian red-brown acidic ales investigated were Pediococcus damnosus, Dekkera bruxellensis, and Acetobacter pasteurianus. In addition, this culture-independent analysis revealed operational taxonomic units that were assigned to an unclassified fungal community member, Candida, and Lactobacillus. The main metabolites present in the brew samples were L-lactic acid, D-lactic acid, and ethanol, whereas acetic acid was produced in lower quantities. The most prevailing aroma compounds were ethyl acetate, isoamyl acetate, ethyl hexanoate, and ethyl octanoate, which may influence the aroma of the end-products. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Analysis of Thermo-Mechanical Distortions in Sliding Components : An ALE Approach

    NARCIS (Netherlands)

    Owczarek, P.; Geijselaers, H.J.M.

    2008-01-01

    A numerical technique for the analysis of heat transfer and thermal distortion in reciprocating sliding components is proposed. In this paper we utilize the Arbitrary Lagrangian Eulerian (ALE) description, where the mesh displacement can be controlled independently of the material displacement. A

  3. Voxel-Based Morphometry ALE meta-analysis of Bipolar Disorder

    Science.gov (United States)

    Magana, Omar; Laird, Robert

    2012-03-01

    A meta-analysis was performed independently to examine changes in gray matter (GM) in patients with bipolar disorder (BP). The meta-analysis was conducted in Talairach space using GingerALE to determine the voxels and their permutation. For data acquisition, published experiments and similar research studies were uploaded onto the online voxel-based morphometry (VBM) database; coordinates of activation locations were then extracted from bipolar-disorder-related journals using Sleuth. Once the coordinates of the given experiments were selected and imported into GingerALE, a Gaussian was applied to all foci points to create the concentration points of GM in BP patients. The results included volume reductions and variations of GM between normal healthy controls and patients with bipolar disorder. A significant number of GM clusters was obtained in normal healthy controls relative to BP patients in the right precentral gyrus, right anterior cingulate, and the left inferior frontal gyrus. In future research, more published journals could be uploaded onto the database and another VBM meta-analysis could be performed, including more activation coordinates or a variation of age groups.
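
For readers unfamiliar with how an ALE (activation likelihood estimation) map is formed, the sketch below reproduces the core combination rule in plain numpy: each experiment's foci are blurred into a modeled activation (MA) map, and the per-voxel ALE value is 1 - prod(1 - MA). This is an illustration only; GingerALE additionally uses sample-size-dependent smoothing kernels, anatomical masks, and permutation-based significance testing, and the grid shape and sigma below are arbitrary choices.

```python
import numpy as np

def ale_map(foci_per_experiment, shape, sigma):
    """Combine per-experiment modeled activation (MA) maps into an ALE map.

    Each experiment's foci are blurred with an isotropic Gaussian; the MA
    value of a voxel is the maximum over that experiment's foci, and the
    experiments are combined as ALE = 1 - prod(1 - MA_i).
    """
    voxels = np.indices(shape).reshape(3, -1).T      # all voxel coordinates
    one_minus = np.ones(voxels.shape[0])
    for foci in foci_per_experiment:
        ma = np.zeros(voxels.shape[0])
        for focus in foci:
            d2 = np.sum((voxels - focus) ** 2, axis=1)
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
        one_minus *= 1.0 - ma
    return (1.0 - one_minus).reshape(shape)

# Two toy "experiments" with one focus each on a small voxel grid.
experiments = [[np.array([4, 4, 4])], [np.array([5, 4, 4])]]
m = ale_map(experiments, shape=(9, 9, 9), sigma=1.5)
print(m.max())  # highest convergence near the two neighbouring foci
```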

  4. Chemistry of magmatic gases from Erta'Ale, Ethiopia

    Energy Technology Data Exchange (ETDEWEB)

    Giggenbach, W.F. (Dept. of Scientific and Industrial Research, Petone, New Zealand); Le Guern, F.

    1976-01-01

    The chemical composition of the gases emitted from a hornito close to the active lava lake of Erta'Ale, Ethiopia, as derived from chemical analyses of 18 samples collected on 23 January 1974, was found to be (in mol-percent): H2O: 79.4, CO2: 10.4, total S: 7.36, HCl: 0.42, H2: 1.49, N2: 0.18, Ar: 0.001, CO: 0.46, and COS: 0.009. Thermodynamic considerations, based on the equilibria CO2 + H2 ⇌ CO + H2O and CO2 + 3H2 + SO2 ⇌ COS + 3H2O, show that the analytical values represent the equilibrium composition of a gas mixture at the measured temperature of around 1130 °C under close-to-surface pressure conditions. Comparison of the Erta'Ale gas emissions with those from other volcanoes suggests a close similarity in their chemical composition. This similarity is considered to be due to common processes governing the release of gaseous species from a magma.

  5. Control de brazo electrónico usando señales electromiográficas [Control of an electronic arm using electromyographic signals]

    Directory of Open Access Journals (Sweden)

    Jorge Andrés García-Pinzón

    2015-05-01

    Work on the extraction of patterns from electromyographic signals (SEMG) has been growing because of its many applications. This article presents an application in which an electronic system is implemented to record the SEMG signals of the upper limb of a subject, in order to remotely control an electronic arm. A preprocessing stage was applied to the recorded signals to remove irrelevant information and to recognize regions of interest; the patterns were then extracted and classified. The techniques used were wavelet analysis (WA), principal component analysis (PCA), the Fourier transform (FT), the discrete cosine transform (DCT), energy, support vector machines (SVM), and artificial neural networks (ANN). This article demonstrates that the proposed methodology achieves a classification performance above 95%. More than 4000 signals were recorded.
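
As a concrete (and deliberately simplified) illustration of the kind of pipeline the abstract describes (wavelet features, dimensionality reduction, and an SVM classifier), here is a Python sketch using pywt and scikit-learn. The synthetic data, the feature choice ("db4", 4 levels, sub-band energies) and the pipeline settings are our own assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(window: np.ndarray) -> np.ndarray:
    """Wavelet-energy features for one EMG window (illustrative choice)."""
    coeffs = pywt.wavedec(window, "db4", level=4)
    return np.array([np.sum(c ** 2) for c in coeffs])  # energy per sub-band

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 windows of 256 samples, 2 gesture classes.
X_raw = rng.standard_normal((200, 256))
y = rng.integers(0, 2, size=200)
X = np.array([emg_features(w) for w in X_raw])

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```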

  6. A DETERMINAÇÃO DOS PRODUTOS AVANÇADOS DE GLICAÇÃO (AGES) E DE LIPOXIDAÇÃO (ALES) EM ALIMENTOS E EM SISTEMAS BIOLÓGICOS: AVANÇOS, DESAFIOS E PERSPECTIVAS [THE DETERMINATION OF ADVANCED GLYCATION (AGES) AND LIPOXIDATION (ALES) PRODUCTS IN FOODS AND IN BIOLOGICAL SYSTEMS: ADVANCES, CHALLENGES AND PERSPECTIVES]

    Directory of Open Access Journals (Sweden)

    Júnia H. Porto Barbosa

    2016-06-01

    Advanced glycation (AGEs) and lipoxidation (ALEs) products are formed through specific condensation reactions between nucleophiles (amino groups of free amino acids or their residues in peptides, aminophospholipids or proteins) and electrophiles (carbonyls of reducing sugars, oxidized lipids or others), generating well-defined sets of covalent adducts. The ε-amino group of lysine is the most reactive precursor in proteins and the primary target of carbohydrate attacks. AGEs/ALEs accumulation has consequences in the development of vascular, renal, neural and ocular complications, as well as in the triggering of inflammatory and neurodegenerative diseases. Therefore, AGEs/ALEs detection, quantification and, in some cases, the assessment of the extent of glycation in biomolecules of different matrices represent a factor of primary interest for science. Reliable analytical methods are thus required. Together with basic concepts, this review presents the main advances, challenges and prospects of research involving AGEs and ALEs in biological and food systems, exploring practical strategies to ensure greater reliability in the analysis of these compounds in different matrices.

  7. El uso de señales en el análisis de coyuntura [The use of signals in short-term economic analysis]

    Directory of Open Access Journals (Sweden)

    Eduardo Espinoza

    2012-07-01

    The analysis and monitoring of the economic situation are among the main tasks of monetary authorities and international organizations, so the tools inherent to this task must be used appropriately and interpreted correctly. In this context, the use of signals extracted from the behaviour of economic series is frequent in the short-term analyses performed by the central banks of Central America and the Dominican Republic. This document reviews the main signals used in the reports of those analyses, comparing the advantages of using monthly, year-on-year, cumulative or annualized rates of change, among others. All this is a preamble to the recommendation of the appropriate signals for monitoring two key variables of the economic situation: prices and production. Keywords: short-term economic analysis, signal extraction, year-on-year growth rates, annualized growth rate.

  8. Stability profile of flavour-active ester compounds in ale and lager ...

    African Journals Online (AJOL)

    User

    2013-01-30

    Jan 30, 2013 ... Currently, one of the main quality problems of beer is the change of its chemical composition during storage, which alters its sensory properties. In this study, ale and lager beers were produced and aged for three months at two storage temperatures. Concentration of volatile ester compounds (VECs) in the.

  9. Dispositivo de asistencia a discapacitados motores: switch controlado por señales electromiográficas [Assistive device for motor-disabled people: a switch controlled by electromyographic signals]

    OpenAIRE

    Haberman, Marcelo; Spinelli, Enrique Mario

    2013-01-01

    This work proposes the use of EMG signals. These signals originate in the electrical potentials (action potentials) that develop across the membranes of muscle fibres when a muscle contraction is attempted. The detection of a muscle contraction, however weak, makes it possible to establish an alternative communication channel between a user and their environment. To this end, a complete system has been developed that, based on the detection of a cont...

  10. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques and structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian techniques, which is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
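
Since the abstract leans on SPH's gridless, kernel-based character, a minimal sketch may help: SPH approximates field values as kernel-weighted sums over neighbouring particles. The 1-D cubic spline kernel and density summation below are textbook SPH, not PRONTO-3D/SPH code; the particle count and smoothing length are arbitrary.

```python
import numpy as np

def cubic_spline_kernel(r: np.ndarray, h: float) -> np.ndarray:
    """Standard 1-D cubic spline smoothing kernel W(r, h) used in SPH."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    w = np.zeros_like(q)
    m1 = q <= 1.0
    m2 = (q > 1.0) & (q <= 2.0)
    w[m1] = 1.0 - 1.5 * q[m1] ** 2 + 0.75 * q[m1] ** 3
    w[m2] = 0.25 * (2.0 - q[m2]) ** 3
    return sigma * w

# Density at each particle by kernel-weighted summation over neighbours.
x = np.linspace(0.0, 1.0, 51)          # particle positions
m = np.full_like(x, 1.0 / x.size)      # equal particle masses
h = 2.0 * (x[1] - x[0])                # smoothing length
rho = np.array([np.sum(m * cubic_spline_kernel(x - xi, h)) for xi in x])
print(rho[25])  # close to 1.0 away from the open boundaries
```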

  11. Diversity and abundance of bird communities associated with semideciduous and pine-oak forests of Viñales National Park

    Directory of Open Access Journals (Sweden)

    Sael Hanoi Pérez Báez

    2016-06-01

    The present work was carried out from February to April 2009 in the semideciduous forest of the "Marvels of Viñales" trail and the pine-oak formation of the Ancón Valley in Viñales National Park, with the main objective of evaluating the diversity and abundance of the bird communities and their degree of association with both formations. The fixed-radius circular plot method was used at 30 count points separated by 150 m, and the vegetation study was based on the methodology proposed by James and Shugart (1970) and Noon (1981) with adaptations; the phenological state of the plant species was recorded and different variables of the forest formation were measured. A total of 44 bird species were detected in the semideciduous forest and 42 in Ancón. Associations were found between several bird and plant species of the formations under study, with species richness (S) increasing with relative abundance and vegetation height decreasing with plant density. The bird communities of the semideciduous forest of the "Marvels of Viñales" trail and of the pine-oak forest of the Ancón Valley showed similar figures for richness, diversity and evenness, but differed in composition and structure. Both study formations showed numerical dominance of Turdus plumbeus and Vireo altiloquus, and the difference was given by the abundance of Teretistris fernandinae in "Marvels of Viñales" and Tiaris canorus in the Ancón Valley. The relationship between the bird and plant communities was demonstrated, and several bird species were most closely associated with Clusia rosea, Calophyllum antillanum, Cuban oak (Quercus), Matayba oppositifolia and cordobán.

  12. An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism

    DEFF Research Database (Denmark)

    Zhang, Tian; Tremblay, Pier-Luc

    2018-01-01

    Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights on molecular mechanisms responsible for specific phenotypes. ALE … autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge on the autotrophic metabolism of S. ovata as well as other acetogenic bacteria.

  13. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    Science.gov (United States)

    2016-06-12

    Two methods for simulating blast loading from explosives buried in soil in underbody blast scenarios, viz. (1) coupled discrete element and particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The study examines the sensitivity of particle size in the DEM_PGM coupling and identifies its limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and

  14. The Relationship between Language Functions and Character Types in "Noon-Valghalam" by Jalal Ale-Ahmad

    Directory of Open Access Journals (Sweden)

    Dr. S. A. Parsa

    Harmony between the language functions of story characters and their character types is one of the characteristics and advantages of modern, successful story writing. In traditional Iranian narrative literature (prose and verse) this point was not considered important: story characters generally speak in the narrator's or writer's manner and, since what they utter is the narrator's speech, they are not representative of their class and character type. Inattention to this subject disturbs both the illusion of reality and characterization, which are important principles of storytelling. This study examines the story "Noon Val Ghalam" by the contemporary writer Jalal Ale-Ahmad from this perspective. The methodology is qualitative, and data collection is based on content analysis and document analysis. As Ale-Ahmad was an Iranian contemporary writer familiar with Western and Iranian authors, one would expect the language and manner of speech of his story characters to reflect their social class. This study shows, however, through various pieces of evidence, that the writer ignores the necessary relationship between language functions and character types and that, by imposing his own knowledge, diction, and political and social views, the independence of the protagonists of his story is not well maintained. The reflection of a writer's political and social thought in his works is not a shortcoming in itself, but presenting the speech of protagonists in a way that is not in harmony with their characters reduces them to instruments for specific social and political messages. This not only distorts the characters but also disrupts the development of one important element of story writing: dialogue. Since in each language people from different social groups use almost the same vocabulary that…

  15. Flocculation in ale brewing strains of Saccharomyces cerevisiae: re-evaluation of the role of cell surface charge and hydrophobicity.

    Science.gov (United States)

    Holle, Ann Van; Machado, Manuela D; Soares, Eduardo V

    2012-02-01

    Flocculation is an eco-friendly process of cell separation, which has been traditionally exploited by the brewing industry. Cell surface charge (CSC), cell surface hydrophobicity (CSH) and the presence of active flocculins, during the growth of two (NCYC 1195 and NCYC 1214) ale brewing flocculent strains, belonging to the NewFlo phenotype, were examined. Ale strains, in the exponential phase of growth, were not flocculent and did not present active flocculent lectins on the cell surface; in contrast, the same strains, in the stationary phase of growth, were highly flocculent (>98%) and presented a hydrophobicity approximately three to seven times higher than in the exponential phase. No relationship between growth phase, flocculation and CSC was observed. For comparative purposes, a constitutively flocculent strain (S646-1B) and its isogenic non-flocculent strain (S646-8D) were also used. The treatment of ale brewing and S646-1B strains with pronase E resulted in a loss of flocculation and a strong reduction of CSH; S646-1B pronase E-treated cells displayed a CSH similar to that of the non-treated S646-8D cells. The treatment of the S646-8D strain with protease did not reduce CSH. In conclusion, the increase of CSH observed at the onset of flocculation of ale strains is a consequence of the presence of flocculins on the yeast cell surface and not the cause of yeast flocculation. CSH and CSC play a minor role in the auto-aggregation of the ale strains since the degree of flocculation is defined, primarily, by the presence of active flocculins on the yeast cell wall.

  16. ALE Meta-Analysis of Schizophrenics Performing the N-Back Task

    Science.gov (United States)

    Harrell, Zachary

    2010-10-01

    MRI/fMRI has already proven itself as a valuable tool in the diagnosis and treatment of many illnesses of the brain, including cognitive problems. By exploiting the differences in magnetic susceptibility between oxygenated and deoxygenated hemoglobin, fMRI can measure blood flow in various regions of interest within the brain. This can determine the level of brain activity in relation to motor or cognitive functions and provide a metric for tissue damage or illness symptoms. Structural imaging techniques have shown lesions or deficiencies in tissue volumes in schizophrenics corresponding to areas primarily in the frontal and temporal lobes. These areas are currently known to be involved in working memory and attention, which many schizophrenics have trouble with. The ALE (Activation Likelihood Estimation) Meta-Analysis is able to statistically determine the significance of brain area activations based on the post-hoc combination of multiple studies. This process is useful for giving a general model of brain function in relation to a particular task designed to engage the affected areas (such as working memory for the n-back task). The advantages of the ALE Meta-Analysis include elimination of single subject anomalies, elimination of false/extremely weak activations, and verification of function/location hypotheses.

  17. Soil Sampling to Demonstrate Compliance with Department of Energy Radiological Clearance Requirements for the ALE Unit of the Hanford Reach National Monument

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G.; Dirkes, Roger L.; Napier, Bruce A.

    2007-04-01

    The Hanford Reach National Monument consists of several units, one of which is the Fitzner/Eberhardt Arid Lands Ecology Reserve (ALE) Unit. This unit is approximately 311 km² of shrub-steppe habitat located to the south and west of Highway 240. To fulfill internal U.S. Department of Energy (DOE) requirements prior to any radiological clearance of land, DOE must evaluate the potential for residual radioactive contamination on this land and determine compliance with the requirements of DOE Order 5400.5. Historical soil monitoring conducted on ALE indicated soil concentrations of radionuclides were well below the Authorized Limits. However, the historical sampling was done at a limited number of sampling locations. Therefore, additional soil sampling was conducted to determine if the concentrations of radionuclides in soil on the ALE Unit were below the Authorized Limits. This report contains the results of 50 additional soil samples. The 50 soil samples collected from the ALE Unit all had concentrations of radionuclides far below the Authorized Limits. The average concentrations for all detectable radionuclides were less than the estimated Hanford Site background. Furthermore, the maximum observed soil concentrations for the radionuclides included in the Authorized Limits would result in a potential annual dose of 0.14 mrem assuming the most probable use scenario, a recreational visitor. This potential dose is well below the DOE 100-mrem per year dose limit for a member of the public. Spatial analysis of the results indicated no observable statistically significant differences between radionuclide concentrations across the ALE Unit. Furthermore, the results of the biota dose assessment screen, which used the ResRad Biota code, indicated that the concentrations of radionuclides in ALE Unit soil pose no significant health risk to biota.

  18. Development of a multimaterial, two-dimensional, arbitrary Lagrangian-Eulerian mesh computer program

    International Nuclear Information System (INIS)

    Barton, R.T.

    1982-01-01

    We have developed a large, multimaterial, two-dimensional Arbitrary Lagrangian-Eulerian (ALE) computer program. The special feature of an ALE mesh is that it can be either an embedded Lagrangian mesh, a fixed Eulerian mesh, or a partially embedded, partially remapped mesh. Remapping is used to remove Lagrangian mesh distortion. This general purpose program has been used for astrophysical modeling, under the guidance of James R. Wilson. The rationale behind the development of this program will be used to highlight several important issues in program design

  19. Transformadas Wavelet impacto fundamental en Procesamiento de señales y compresión de imágenes [Wavelet transforms: fundamental impact on signal processing and image compression]

    OpenAIRE

    González Arboleda, José Rodrigo

    2014-01-01

    This work presents the concepts of the Fourier transform and the wavelet transform, together with the properties, theorems and propositions that support them. Solutions in Fourier series that represent the solutions of signal-processing problems will be shown, as well as the wavelet transform, which enables the compression of data and images. In general, signal processing and, in particular, image compression is one of the most important needs in applications such as coding and ...

  20. ADQUISICIÓN Y PROCESAMIENTO DE SEÑALES EMG PARA CONTROLAR MOVIMIENTO DE UN BRAZO HIDRAULICO [ACQUISITION AND PROCESSING OF EMG SIGNALS TO CONTROL THE MOVEMENT OF A HYDRAULIC ARM]

    Directory of Open Access Journals (Sweden)

    Jorge Andrés García Pinzon

    2014-06-01

    This article presents the design and implementation of an electronic system for recording the electromyographic signals of the upper limb of a (human) subject. Following the implementation of the electronic system, a preprocessing and processing stage is applied to the recorded signals; the techniques used for this purpose are wavelet analysis (WA), principal component analysis (PCA), the Fourier transform (FT), the discrete cosine transform (DCT), support vector machines (SVM) and artificial neural networks (ANN). These techniques were used to remove irrelevant information, recognize regions of interest, extract patterns from each group of signals, and classify a new signal so as to precisely control the movement the subject wants to execute with the hydraulic arm. Within industrial process control techniques, the aim is to build an application to control two degrees of freedom plus the end effector of the hydraulic arm in the automation and industrial equipment maintenance laboratory of the Universidad de Pamplona.

  1. La dialyse péritonéale chez les patients de moins de vingt ans ... [Peritoneal dialysis in patients under twenty years of age ...]

    African Journals Online (AJOL)

    Peritoneal dialysis in patients under twenty years of age: the experience of a Moroccan university hospital. Intissar Haddiya, Hakima Rhou, Fatima Ezaitouni, Naima Ouzeddoun, Rabia Bayahia, Loubna Benamar ...

  2. Magmatic architecture within a rift segment: Articulate axial magma storage at Erta Ale volcano, Ethiopia

    Science.gov (United States)

    Xu, Wenbin; Rivalta, Eleonora; Li, Xing

    2017-10-01

    Understanding the magmatic systems beneath rift volcanoes provides insights into the deeper processes associated with rift architecture and development. At the slow-spreading Erta Ale segment (Afar, Ethiopia), the transition from continental rifting to seafloor spreading is ongoing on land. A lava lake has been documented since the twentieth century at the summit of the Erta Ale volcano and acts as an indicator of the pressure of its magma reservoir. However, the structure of the plumbing system of the volcano feeding such a persistent active lava lake and the mechanisms controlling the architecture of magma storage remain unclear. Here, we combine high-resolution satellite optical imagery and radar interferometry (InSAR) to infer the shape, location and orientation of the conduits feeding the 2017 Erta Ale eruption. We show that the lava lake was rooted in a vertical dike-shaped reservoir that had been inflating prior to the eruption. The magma was subsequently transferred into a shallower feeder dike. We also find a shallow, horizontal magma lens elongated along axis inflating beneath the volcano during the later period of the eruption. Edifice stress modeling suggests that the hydraulically connected system of horizontal and vertical thin magmatic bodies, able to open and close, is arranged spatially according to stresses induced by loading and unloading due to topographic changes. Our combined approach may provide new constraints on the organization of magma plumbing systems beneath volcanoes in continental and marine settings.

  3. Optimization studies of HgSe thin film deposition by electrochemical atomic layer epitaxy (EC-ALE)

    CSIR Research Space (South Africa)

    Venkatasamy, V

    2006-06-01

    Full Text Available Studies of the optimization of HgSe thin film deposition using electrochemical atomic layer epitaxy (EC-ALE) are reported. Cyclic voltammetry was used to obtain approximate deposition potentials for each element. These potentials were then coupled...

  4. History of the pharmacies in the town of Aleşd, Bihor county.

    Science.gov (United States)

    Paşca, Manuela Bianca; Gîtea, Daniela; Moisa, Corina

    2013-01-01

    In 1848 the pharmacist Horváth Mihály established the first pharmacy in Aleşd, called Speranţa (Remény). Following the brief history of this pharmacy, we notice that in 1874 the pharmacy came into the possession of Kocsiss József. In 1906 the personal rights of the pharmacy were transferred to Kocsiss Béla, and in 1938 his son, the pharmacist Kocsiss Dezső, became the new owner. In 1949 the pharmacy was nationalized and became the property of the Pharmaceutical Office Oradea; it was renamed Farmacia nr. 22 of Aleşd and continued its activity throughout the whole communist period. Starting with 1991 it entered the private system as Angefarm, owned by the pharmacist Mermeze Gheorghe, and from 2003 until now it has operated under the name Vitalogy 3, owned by Ghitea Sorin. A second pharmacy, Sfântul Anton, was founded in 1937 by the pharmacist Herceg Dobreanu Atena; it, however, had no continuity during the communist period.

  5. SALE-3D, 3-D Fluid Flow, Navier Stokes Equation Using Lagrangian or Eulerian Method

    International Nuclear Information System (INIS)

    Amsden, A.A.; Ruppel, H.M.

    1991-01-01

    1 - Description of problem or function: SALE-3D calculates three-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from the use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a three-dimensional network of arbitrarily shaped, six-sided deformable cells, and a variety of user-selectable boundary conditions are provided in the program. 2 - Method of solution: SALE3D uses an ICED-ALE technique, which combines the ICE method of treating flow speeds and the ALE mesh treatment to calculate three-dimensional fluid flow. The finite-difference approximations to the conservation of mass, momentum, and specific internal energy differential equations are solved in a sequence of time steps on a network of deformable computational cells. The basic hydrodynamic part of each cycle is divided into three phases: (1) an explicit solution of the Lagrangian equations of motion updating the velocity field by the effects of all forces, (2) an implicit calculation using a Newton-Raphson iterative scheme that provides time-advanced pressures and velocities, and (3) the addition of advective contributions for runs that are Eulerian or contain some relative motion of grid and fluid. A powerful feature of this three-phase approach is the ease with which
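
To make the phase structure concrete, here is a deliberately simplified 1-D Python sketch of an ALE advection cycle: a Lagrangian phase that moves the mesh nodes with the fluid, followed by a conservative remap back onto the fixed Eulerian mesh. It illustrates only the Lagrange-plus-remap skeleton (the implicit ICE pressure phase is omitted), and none of it is SALE-3D code; the mesh size, velocity, and time step are arbitrary.

```python
import numpy as np

def ale_advect_step(rho, u_node, x_nodes, dt):
    """One simplified ALE cycle for 1-D scalar advection (illustration only).

    Phase 1 (Lagrangian): move the mesh nodes with the local fluid velocity.
    Phase 3 (remap): conservatively remap cell masses back onto the fixed
    Eulerian mesh via interval overlaps. The implicit pressure phase (2) of
    the ICED-ALE scheme is omitted here.
    """
    x_lag = x_nodes + dt * u_node              # Lagrangian node positions
    mass = rho * np.diff(x_nodes)              # cell masses move with cells
    rho_new = np.zeros_like(rho)
    for i in range(rho.size):                  # target (fixed) cells
        left, right = x_nodes[i], x_nodes[i + 1]
        for j in range(rho.size):              # source (Lagrangian) cells
            lo, hi = max(left, x_lag[j]), min(right, x_lag[j + 1])
            if hi > lo:                        # overlap => mass exchange
                rho_new[i] += mass[j] * (hi - lo) / (x_lag[j + 1] - x_lag[j])
        rho_new[i] /= right - left
    return rho_new

x = np.linspace(0.0, 1.0, 101)                             # fixed node mesh
rho = np.where((x[:-1] > 0.4) & (x[:-1] < 0.6), 2.0, 1.0)  # density pulse
u = np.full(x.size, 0.5)                                   # node velocities
for _ in range(10):
    rho = ale_advect_step(rho, u, x, dt=0.005)
print(rho.max())  # pulse advects right with mild remap diffusion
```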

  6. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. From the viewpoint of a comparison between the OES and ISS methods, the following was found: (1) there is a small difference in computation speed; (2) the ISS method shows faster convergence; and (3) the ISS method saves about 80% of computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with a free iteration. (author)
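
The table-look-up idea mentioned for the exponential function (which dominates the attenuation terms in the method of characteristics) is easy to sketch: precompute exp(-x) on a uniform grid and interpolate linearly between table entries. The sketch below is illustrative rather than the author's implementation; the grid range and size are arbitrary choices.

```python
import numpy as np

# Precompute exp(-x) on a uniform grid; evaluate by linear interpolation.
X_MAX, N = 20.0, 4096
xs = np.linspace(0.0, X_MAX, N)
table = np.exp(-xs)
step = xs[1] - xs[0]

def exp_neg_lookup(x: np.ndarray) -> np.ndarray:
    """Approximate exp(-x) for x >= 0 via table plus linear interpolation."""
    x = np.clip(x, 0.0, X_MAX - step)
    idx = (x / step).astype(int)
    frac = x / step - idx
    return table[idx] * (1.0 - frac) + table[idx + 1] * frac

x = np.linspace(0.0, 10.0, 1_000_000)
err = np.max(np.abs(exp_neg_lookup(x) - np.exp(-x)))
print(f"max abs error: {err:.2e}")  # about 3e-6 for this grid spacing
```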

  7. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  8. Bound-Preserving Reconstruction of Tensor Quantities for Remap in ALE Fluid Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Klima, Matej [Czech Technical Univ. in Prague, Praha (Czech Republic); Kucharik, MIlan [Czech Technical Univ. in Prague, Praha (Czech Republic); Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Velechovsky, Jan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting of the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells – an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting which aims to constrain the J2 invariant that is proportional to the specific elastic energy and scale the tensor accordingly. We also propose a method which involves remapping and limiting the J2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.
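
The third approach described in the abstract, remapping the J2 invariant independently with scalar techniques and then scaling the deviatoric stress tensor to match it, is compact enough to sketch. The snippet below shows only that scaling step in plain numpy; the tensor values and the "remapped" invariant are made-up placeholders, not data from the paper.

```python
import numpy as np

def j2(s: np.ndarray) -> float:
    """Second invariant J2 = 0.5 * s:s of a deviatoric stress tensor s."""
    return float(0.5 * np.tensordot(s, s))

def scale_to_j2(s: np.ndarray, j2_target: float) -> np.ndarray:
    """Scale a deviatoric tensor so its J2 matches a remapped target value."""
    j2_cur = j2(s)
    if j2_cur == 0.0:
        return s
    return s * np.sqrt(j2_target / j2_cur)

# Toy remapped cell: tensor from a component-wise remap, with the invariant
# remapped separately by a scalar (bound-preserving) method.
s_remap = np.array([[ 2.0,  0.5,  0.0],
                    [ 0.5, -1.0,  0.0],
                    [ 0.0,  0.0, -1.0]])
j2_remap = 2.2   # hypothetical value from the scalar remap of J2
s_new = scale_to_j2(s_remap, j2_remap)
print(j2(s_new))  # matches j2_remap, so elastic energy is consistent
```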

  9. Les bienfaits pour la santé et la prédominance du sucre dans les céréales pour déjeuner destinées aux enfants au Canada [Health benefits and the predominance of sugar in breakfast cereals marketed to children in Canada]

    Directory of Open Access Journals (Sweden)

    Monique Potvin Kent

    2017-01-01

    Introduction: This study aims to compare the nutritional content and health benefits of breakfast cereals marketed to children with those not marketed to children, and to assess the predominance of added sugar in these products. Methods: We collected data on the nutritional content of 262 breakfast cereals sold in the five main grocery chains in Ottawa (Ontario) and Gatineau (Quebec). For each cereal, we recorded the first five ingredients and the amount of added sugars indicated in the ingredient list. The cereal brands were then classified as either "healthier" or "less healthy" using the United Kingdom's nutrient profiling model. We assessed each cereal against several criteria to determine whether or not it was marketed to children. Statistical comparisons were made between the cereals marketed to children and the others. Results: Of all the breakfast cereals, 19.8% were marketed to children and contained significantly less fat and saturated fat. These cereals had significantly higher sodium and sugar content and lower fibre and protein content, and they were three times more likely to be classified as "less healthy" than the cereals not marketed to children. None of the child-targeted cereals was sugar-free, and in 75% of them sugar ranked second in the ingredient list. Six breakfast cereal companies had a child-targeted product line composed entirely of "less healthy" cereals. Conclusion: Regulation is needed to limit food marketing targeting children and young people under 17 years of age.

  10. The impact of different ale brewer’s yeast strains on the proteome of immature beer

    DEFF Research Database (Denmark)

    Berner, Torben Sune; Jacobsen, Susanne; Arneborg, Nils

    2013-01-01

    BACKGROUND: It is well known that brewer's yeast affects the taste and aroma of beer. However, the influence of brewer's yeast on the protein composition of beer is currently unknown. In this study, changes in the proteome of immature beer, i.e. beer that has not been matured after fermentation …, by ale brewer's yeast strains with different abilities to degrade fermentable sugars were investigated. RESULTS: Beers were fermented from standard hopped wort (13° Plato) using two ale brewer's yeast (Saccharomyces cerevisiae) strains with different attenuation degrees. Both immature beers had the same … These three proteins, all derived from yeast, were identified as cell wall associated proteins, that is Exg1 (an exo-β-1,3-glucanase), Bgl2 (an endo-β-1,2-glucanase), and Uth1 (a cell wall biogenesis protein). CONCLUSION: Yeast strain dependent changes in the immature beer proteome were identified, i.e. Bgl2 …

  11. Diseño y construcción de un módulo didáctico para la adquisición y análisis de señales ECG, EEG y EMG [Design and construction of a didactic module for the acquisition and analysis of ECG, EEG and EMG signals]

    OpenAIRE

    Valencia Brito, Efraín Issrael; Villa Parra, Flavio Fernando

    2013-01-01

    This document describes the development of a didactic module for the acquisition and display of ECG, EEG and EMG signals, for laboratory exercises that are expected to support the teaching and study of biosignals to engineering students. To this end, concepts of signal acquisition, digitization, visualization and analysis were applied at both the hardware and software levels. The signals are acquired with surface electrodes and, in the case of ...

  12. The Montana ALE (Autonomous Lunar Excavator) Systems Engineering Report

    Science.gov (United States)

    Hull, Bethanne J.

    2012-01-01

    On May 21-26, 2012, the third annual NASA Lunabotics Mining Competition will be held at the Kennedy Space Center in Florida. This event brings together student teams from universities around the world to compete in an engineering challenge. Each team must design, build and operate a robotic excavator that can collect artificial lunar soil and deposit it at a target location. Montana State University, Bozeman, is one of the institutions selected to field a team this year. This paper will summarize the goals of MSU's lunar excavator project, known as the Autonomous Lunar Explorer (ALE), along with the engineering process that the MSU team is using to fulfill these goals, according to NASA's systems engineering guidelines.

  13. Diseño y desarrollo de un sistema para la adquisición de señales de electrocardiografía (ECG) [Design and development of a system for the acquisition of electrocardiography (ECG) signals]

    OpenAIRE

    BASELGA MEMBRIVE, MARÍA

    2015-01-01

    The excitable cells of our organism generate very important electrical signals (hundreds of millivolts) which, however, arrive strongly attenuated (microvolts or a few millivolts) at the surface of the human body, because internally we are very good conductors. The use of bioelectric signals such as the electrocardiogram (ECG), the electroencephalogram (EEG) or the electromyogram (EMG), among others, has become a clinical routine for the diagnosis of different pa...

  14. Aspecte morfometrice ale meiocitelor şi grăuncioarelor de polen la plantele de floarea–soarelui cu androsterilitate indusă [Morphometric aspects of meiocytes and pollen grains in sunflower plants with induced androsterility]

    Directory of Open Access Journals (Sweden)

    Victoria NECHIFOR

    2017-12-01

    Morphometric characterization is an important element in the study of dynamic cellular behavior in plant responses to biotic and abiotic stimuli. The aim of the study was to determine the morphometric parameters of meiocytes and pollen grains in different phases of microsporogenesis in fertile sunflower plants and in plants with induced androsterility. The comparative morphometric analysis revealed the gametocidal effect of gibberellin, manifested as abnormal changes in the shape and volume of cells. These modifications lead to deficiencies in the integrity/rigidity and, consequently, in the functionality of the cell wall, as well as in the physical properties of the protoplasm. The low values of the morphometric parameters were also correlated with the degree of sterility of the pollen grains.

  15. BOOK REVIEW - Adrian Liviu Ivan, Teorii și practice ale integrării europene [Theories and Practices of European Integration]

    Directory of Open Access Journals (Sweden)

    Adrian Daniel STAN

    2016-06-01

    Teorii și practice ale integrării europene [Theories and Practices of European Integration] is a genuine contribution to understanding how the European Union's particular character has been adjusted over more than half a century of institutional growth and development. Professor Ivan's key argument in this book is that the European Union has been shaped as a functional project, taking into consideration the diverse heritage and traditions of its Member States. The opening chapter of the book focuses on the particularities of the international relations discipline after the Second World War in order to introduce the theme of the European integration process. This chapter must be read within a series of contributions dedicated to the European integration process and to the theories that made this integration possible, because Professor Ivan has previously published books such as Statele Unite ale Europei [The United States of Europe] and Sub zodia Statelor Unite ale Europei [Under the Sign of the United States of Europe], in which he debates the origins of the European construction and brings forward arguments to support the importance of each theoretical and functional pillar of this "Common European Project".

  16. Estudio de técnicas de análisis y clasificación de señales EEG en el contexto de sistemas BCI (Brain Computer Interface) [Study of techniques for the analysis and classification of EEG signals in the context of BCI (Brain-Computer Interface) systems]

    OpenAIRE

    Henríquez Muñoz, Claudia Nureibis

    2014-01-01

    University Master's in Research and Innovation in ICT. Brain-Computer Interfaces (BCI) are a technology based on the acquisition and processing of brain signals for the control of various devices. Their main objective is to provide the user's brain with a new output channel that requires voluntary adaptive control. BCIs usually focus on recognizing events acquired by methods such as the electroencephalogram (EEG). This...

  17. VALORI POSTEXPERIMENTALE ALE CULTURII MANAGERIALE ÎN INSTITUŢIA PREŞCOLARĂ [POST-EXPERIMENTAL VALUES OF MANAGERIAL CULTURE IN THE PRESCHOOL INSTITUTION]

    Directory of Open Access Journals (Sweden)

    Victoria COJOCARU

    2015-12-01

    Full Text Available Prezentul articol reflectă analiza datelor experimentului de control la tema: Valoarea metodologică a culturii mana-geriale în instituţia preşcolară, unde sunt prezentate pe niveluri rezultatele comparative ale grupelor de manageri şi masteranzi implicaţi în experimentul formativ. THE POSTEXPERIMENTAL VALUES OF MANAGERIAL CULTURE IN THE PRESCHOOL INSTITUTIONThis article reflects the analysis control experiment data of subjects The methodological value of managerial culture in preschool, where are presented comparative results of managers and masters by levels of evidence developed at this stage.

  18. Kas erivajadustega lapsed saavad õigel ajal abi? [Do children with special needs get help at the right time?] / Ene Mägi, Urve Raudsepp-Alt, Ale Sprenk, Peeter Aas

    Index Scriptorium Estoniae

    2009-01-01

    The question is answered by Ene Mägi, head of the Department of Special and Social Pedagogy of the Institute of Educational Sciences of Tallinn University; Urve Raudsepp-Alt, chief specialist of the general education department of the Tallinn Education Board; Ale Sprenk, director of Krabi Basic School; and Peeter Aas, head of the education, culture and social affairs department of the Põlva County Government.

  19. Vajilla de mesa (terra sigillata y cerámica engobada) de la ciudad romana de Los Bañales (Uncastillo, Zaragoza) = Roman Pottery (terra sigillata and engobada) at the archaeological site of Los Bañales (Uncastillo, Zaragoza)

    Directory of Open Access Journals (Sweden)

    Elena Lasaosa Pardo

    2014-12-01

    In the following pages, the material culture results, specifically terra sigillata and engobada pottery, obtained from a series of archaeological interventions undertaken in the 1970s by Antonio Beltrán Martínez at the archaeological site of Los Bañales (Uncastillo, Zaragoza), are presented. The aim of this article is to increase the information available on these ceramics and to contribute towards a greater understanding of the ancient city of Los Bañales. This study will also assist the recent archaeological investigations at this historic site, which have been resumed in recent years, by providing better information on the form, chronology and use of these ceramics, and by communicating the results to both researchers and the interested public. It will also contribute to the cultural heritage of Uncastillo through the conservation of the ceramic evidence and the public display of this material culture.

  20. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  1. Trayectoria religiosa de un clérigo español a principios del siglo XIX. La figura de Rafael Crisanto Alesón [Religious trajectory of a Spanish cleric at the beginning of the nineteenth century: the figure of Rafael Crisanto Alesón]

    Directory of Open Access Journals (Sweden)

    Rebeca Viguera Ruiz

    2015-01-01

    In the context of the major transformations that Spain underwent between the eighteenth and nineteenth centuries, citizens of all social classes and occupations were the true architects of the cultural, ideological and political changes that took place. In an attempt to better understand the essence of some of these popular and religious components of society, this paper briefly presents the most relevant biographical notes on Rafael D. Crisanto Alesón Alesón as a first approach to his life. Alesón lived between those two centuries, which saw the emergence first of the Enlightenment and then of liberalism, which would eventually overthrow the Old Regime monarchy. He was a man of the Church, well educated, who enjoyed a comfortable economic status that afforded him a good education and high social standing among his neighbours. These pages seek to portray a religious figure who, from a local level and through the humble exercise of his religious ministry, helped to spread the message of the Catholic faith at a time of great stress and generalized crisis in Spain.

  2. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

    Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A DEMODULATOR OF PWM SIGNALS GENERATED BY A DIGITAL ACCELEROMETER IS DEVELOPED USING A MICROCONTROLLER

    Directory of Open Access Journals (Sweden)

    Eduardo Pérez Lobato

    2006-08-01

    This paper presents the use of a microcontroller to demodulate two Pulse Width Modulated (PWM) signals generated by a digital accelerometer, obtain their pulse widths, and transmit them serially to a parallel port of a general-purpose computer.
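
On the microcontroller this amounts to timestamping the rising and falling edges of each PWM line. The Python sketch below shows the same edge-timing idea offline on a sampled waveform; it is an illustration of the principle, not the paper's firmware, and the carrier frequency, duty cycle, and sample rate are invented values.

```python
import numpy as np

def pulse_widths(samples: np.ndarray, fs: float) -> np.ndarray:
    """Return the widths (s) of the high pulses in a sampled digital signal."""
    s = (samples > 0.5).astype(int)
    edges = np.diff(s)
    rising = np.where(edges == 1)[0] + 1
    falling = np.where(edges == -1)[0] + 1
    widths = []
    for r in rising:                       # pair each rising edge with the
        f = falling[falling > r]           # next falling edge
        if f.size:
            widths.append((f[0] - r) / fs)
    return np.asarray(widths)

# Toy PWM: 1 kHz carrier, 30% duty cycle, sampled at 1 MHz.
fs, f_pwm, duty = 1e6, 1e3, 0.30
t = np.arange(0, 0.01, 1 / fs)
pwm = ((t * f_pwm) % 1.0 < duty).astype(float)
print(pulse_widths(pwm, fs).mean())  # ~3.0e-4 s, i.e. duty / f_pwm
```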

  4. CARACTERISTICILE GENERALE ALE LIMBAJULUI ŞTIINŢIFIC [GENERAL CHARACTERISTICS OF THE SCIENTIFIC LANGUAGE]

    Directory of Open Access Journals (Sweden)

    Galina PLEŞCA

    2016-12-01

    The article deals with the study of scientific language as a subsystem of the literary language, the aim being to identify its general characteristics. Scientific language addresses reason and logic, reasoning being its basic characteristic feature, and its function of conveying scientific and utilitarian information based on logical and deductive reasoning is the dominant one, serving the purposes of informing and educating, the basic pillars of any scientific text.

  5. Computer simulation of explosion crater in dams with different buried depths of explosive

    Science.gov (United States)

    Zhang, Zhichao; Ye, Longzhen

    2018-04-01

    Based on the multi-material ALE method, this paper conducts a computer simulation of the explosion crater in dams with different buried depths of explosive, using the LS-DYNA program. The results show that the crater size increases with the buried depth of the explosive at first, but a closed explosion cavity, rather than a visible crater, is formed when the buried depth increases beyond a certain point. The soil in the explosion cavity is carried away by the explosion products, and the soil under the cavity is compressed, with its density increased. This research can provide a reference for the anti-explosion design of dams in the future.

  6. Una técnica de pronóstico de señales basada en redes neuronales [A signal forecasting technique based on neural networks]

    Directory of Open Access Journals (Sweden)

    Jorge Eduardo Ortiz T.

    2001-10-01

    The growing interest in building signal-processing systems with neural networks, especially during the last twenty years, stems from the possibility of significant advances in fields that are little explored and whose handling is complex by nature. One of these fields is signal prediction. A signal is a collection of values that generally represent successive measurements of a real-world event; the values are taken over a specific time at regular intervals, representing a sample of the event under study or of its characteristics. Predicting a signal consists of obtaining a set of values that, within an acceptable margin of error, constitute an estimate of the signal's future behaviour. The prediction process is much more than the simplistic act of venturing values, since it requires the construction of an adequate model of the dynamics of the system under consideration.

  7. Validación de señales vibro-acústicas para el diagnóstico de fallas en rodamientos en un generador síncrono [Validation of vibro-acoustic signals for the diagnosis of bearing faults in a synchronous generator]

    OpenAIRE

    Zulma Yadira Medrano Hurtado; Carlos Pérez Tello

    2017-01-01

    This work describes the procedure and the tools used in the measurement and diagnosis of vibration signals captured with acceleration transducers (piezoelectric accelerometers) and acoustic transducers (omnidirectional microphones). In addition, an experimental arrangement was developed using the Taguchi methodology to validate the information recorded from the vibration signals for bearings without a fault and with an artificial fault, respectively. The artificial fault consist...

  8. Arbitrary Lagrangian-Eulerian method for non-linear problems of geomechanics

    International Nuclear Information System (INIS)

    Nazem, M; Carter, J P; Airey, D W

    2010-01-01

    In many geotechnical problems it is vital to consider the geometrical non-linearity caused by large deformation in order to capture a more realistic model of the true behaviour. The solutions so obtained should then be more accurate and reliable, which should ultimately lead to cheaper and safer design. The Arbitrary Lagrangian-Eulerian (ALE) method originated from fluid mechanics, but has now been well established for solving large deformation problems in geomechanics. This paper provides an overview of the ALE method and its challenges in tackling problems involving non-linearities due to material behaviour, large deformation, changing boundary conditions and time-dependency, including material rate effects and inertia effects in dynamic loading applications. Important aspects of ALE implementation into a finite element framework will also be discussed. This method is then employed to solve some interesting and challenging geotechnical problems such as the dynamic bearing capacity of footings on soft soils, consolidation of a soil layer under a footing, and the modelling of dynamic penetration of objects into soil layers.

  9. Señales entre hongos patógenos y plantas hospederas resistentes

    Directory of Open Access Journals (Sweden)

    G. Camarena Gutiérrez

    2001-01-01

    Full Text Available Obligate parasitic fungi obtain their nutrients from living cells. During their life cycle, three types of intracellular structure are formed (invasion hypha, M haustorium and D haustorium), and each can affect the surrounding host-cell membrane, as well as other cell components, in a different way. Each intracellular structure also prevents the fungal activity from triggering the plant's non-specific defences, possibly by interfering with the signalling system rather than with the expression of the defence.

  10. Sistema de detección de señales de tráfico para la localización de intersecciones viales y frenado anticipado

    Directory of Open Access Journals (Sweden)

    Gabriel Villalón-Sepúlveda

    2017-04-01

    A traffic-sign detection system whose behaviour is characterised depending on the distance is presented. The method is based on color segmentation in the normalized-RGB space (ErEgEb) to generate regions of interest (ROIs), and on classification of the sign type using the YCbCr and ErEgEb spaces to build a statistical template; to remove the background, a probability distribution function is proposed that models the color of the objects of interest and of their background. The system specializes in Stop and Give Way signs. Its detection capability at different distances is then compared with that of the Viola & Jones method. For distances below 48 meters this method reaches a detection rate of 87.5% for the Give Way sign and 95.4% for the Stop sign, while for distances below 30 meters the detection rate is 100%. These results exceed the state of the art. The experiments were carried out on a traffic-sign database generated from images taken in several streets of Santiago, Metropolitan Region, Chile, using an experimental vehicle designed for developing intelligent systems. Keywords: road intersection, accidents, traffic signs, statistical templates, distance, color, Chile
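
    The chromaticity computation the abstract refers to can be sketched in a few lines of Python; the red threshold below is an illustrative assumption, not the paper's calibrated statistical template.

    ```python
    # Normalized-RGB (ErEgEb) segmentation sketch for red-sign ROI candidates.
    import numpy as np

    def normalized_rgb(img):
        """Map an H x W x 3 uint8 image to Er, Eg, Eb chromaticity channels."""
        rgb = img.astype(np.float64)
        s = rgb.sum(axis=2, keepdims=True)
        s[s == 0] = 1.0                      # avoid division by zero
        return rgb / s                       # each pixel now sums to 1

    def red_roi_mask(img, r_min=0.5):
        """Candidate ROI mask for predominantly red pixels (Stop / Give Way)."""
        e = normalized_rgb(img)
        return e[..., 0] > r_min             # Er channel dominates

    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[1, 1] = (200, 30, 30)                # a "red" pixel
    print(red_roi_mask(img))
    ```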

  11. Temperature profile data collected from the ALEJANDRO DE HUMBOLDT from 19 September 1971 to 26 September 1971 (NODC Accession 7500942)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using bottle casts from the ALEJANDRO DE HUMBOLDT in the coastal waters of California from 19 September 1971 to 26 September...

  12. Estudio comparativo de técnicas de reducción de ruido en señales industriales mediante Transformada Wavelet Discreta y selección adaptativa del umbral

    Directory of Open Access Journals (Sweden)

    Antonio Cedeño Pozo Ing.,

    2013-04-01

    Full Text Available Noise reduction techniques are widely used in audio recording, image editing and industrial signal processing. The idea is to reconstruct the original data from the noisy signal by suppressing all, or almost all, of the distortion generated by the noise inherent in the physical processes. This paper compares different noise-suppression methods based on adaptive threshold selection. These techniques have been used extensively in image processing, but the aim of this work is to evaluate their performance for noise reduction in industrial signals. In particular, the behaviour of the BayesShrink, NormalShrink, Modified Shrink and NeighShrink methods for the reduction of Gaussian noise in these signals is analysed. For this purpose a set of benchmark signals was used, including the signals proposed by Donoho and other representative measurements obtained from real processes in Cuban nickel plants. The tests carried out show that the NeighShrink algorithm performs best on the analysed data.
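
    For reference, one of the compared estimators is easy to sketch with PyWavelets: BayesShrink picks a per-subband soft threshold sigma^2/sigma_x from an estimate of the noise variance. The wavelet and decomposition level below are illustrative choices, not necessarily the paper's settings.

    ```python
    # Hedged BayesShrink denoising sketch using PyWavelets.
    import numpy as np
    import pywt

    def bayes_shrink(signal, wavelet="db8", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # noise std estimated from the finest detail subband (Donoho's rule)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        out = [coeffs[0]]                              # keep approximation
        for d in coeffs[1:]:
            var_x = max(np.var(d) - sigma**2, 1e-12)   # signal variance estimate
            thr = sigma**2 / np.sqrt(var_x)            # BayesShrink threshold
            out.append(pywt.threshold(d, thr, mode="soft"))
        return pywt.waverec(out, wavelet)

    rng = np.random.default_rng(1)
    clean = np.sin(np.linspace(0, 6 * np.pi, 1024))
    noisy = clean + 0.3 * rng.standard_normal(1024)
    denoised = bayes_shrink(noisy)
    print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
    print("RMSE after :", np.sqrt(np.mean((denoised[:1024] - clean) ** 2)))
    ```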

  13. Adquisición, registro y transmisión en tiempo real de señales sismológicas bajo TCP/IP

    Directory of Open Access Journals (Sweden)

    Vargas-jiménez Carlos A.

    2001-08-01

    Full Text Available

    The constant evolution of network technologies has made possible the development of applications facilitating real-time access to information. This makes it possible to implement remote monitoring systems that, using PCs, achieve efficient transmission of signals through computer networks. In this work, the design and implementation of a client/server system are explained which, based on TCP/IP, transmits 16 seismological signals in real time over a LAN and the Internet. The server is a PC equipped with a data-acquisition card and is in charge of carrying out the analog/digital conversion of the signals, storing in files those that correspond to seismic events and, at the same time, serving the applications of the different clients. The client software allows users to view in real time the signals that the server is acquiring and to carry out basic processing of the signals that have been registered on the server.

    The development of a seismic-variable monitoring system is presented, covering the acquisition and conditioning of 16 simultaneous channels, preprocessing, recording and real-time transmission over IP networks. The system targets Windows, for which dynamic libraries oriented to real-time device access were used. Likewise, data transfer to the PC is performed via DMA in order to implement real-time processing of multiple signals. The architecture of the seismic-monitoring IP network is client/server, with the server being a PC equipped with a data-acquisition card, in charge of carrying out the analog/digital conversion…
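
    The transport layer of such a system can be illustrated with a toy Python TCP client/server; the port, frame rate and synthetic 16-channel data below are assumptions for the sketch, standing in for the DAQ card and DMA transfers of the original.

    ```python
    # Toy client/server over TCP/IP streaming 16-channel frames.
    import socket, struct, threading, math, time

    HOST, PORT, CHANNELS = "127.0.0.1", 5500, 16

    def recv_exact(sock, n):
        """Read exactly n bytes from a TCP stream."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed")
            buf += chunk
        return buf

    def server():
        with socket.socket() as srv:
            srv.bind((HOST, PORT)); srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                for k in range(5):                   # five sample frames
                    frame = [math.sin(0.1 * k + ch) for ch in range(CHANNELS)]
                    conn.sendall(struct.pack(f"<{CHANNELS}f", *frame))
                    time.sleep(0.01)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                                  # let the server start

    with socket.create_connection((HOST, PORT)) as cli:
        for _ in range(5):
            raw = recv_exact(cli, 4 * CHANNELS)
            print(struct.unpack(f"<{CHANNELS}f", raw)[:3], "...")
    ```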

  14. El yacimiento arqueológico de Los Bañales (Uncastillo, Zaragoza : ensayo de actualización

    Directory of Open Access Journals (Sweden)

    María Dolores Lasuén Alegre

    2008-01-01

    Full Text Available Los Bañales is a Roman city with an indigenous substrate, located in an intensely Romanized area of the middle Ebro Valley, in Tarraconensis. Among its many peculiarities, it presents a water-supply and hydraulic-exploitation system, with baths and a remarkably well-preserved and singular aqueduct. The study presented here, which undertakes an archaeological review of the site, arises as part of the preliminary work on the enhancement of the archaeological site and of its current Research Plan.

  15. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  16. Finite element methods in incompressible, adiabatic, and compressible flows from fundamental concepts to applications

    CERN Document Server

    Kawahara, Mutsuto

    2016-01-01

    This book focuses on the finite element method in fluid flows. It is targeted at researchers, from those just starting out up to practitioners with some experience. Part I is devoted to the beginners who are already familiar with elementary calculus. Precise concepts of the finite element method remitted in the field of analysis of fluid flow are stated, starting with spring structures, which are most suitable to show the concepts of superposition/assembling. Pipeline system and potential flow sections show the linear problem. The advection–diffusion section presents the time-dependent problem; mixed interpolation is explained using creeping flows, and elementary computer programs by FORTRAN are included. Part II provides information on recent computational methods and their applications to practical problems. Theories of Streamline-Upwind/Petrov–Galerkin (SUPG) formulation, characteristic formulation, and Arbitrary Lagrangian–Eulerian (ALE) formulation and others are presented with practical results so...

  17. The moduli space of instantons on an ALE space from 3d $\mathcal{N}=4$ field theories

    CERN Document Server

    Mekareeya, Noppadol

    2015-01-01

    The moduli space of instantons on an ALE space is studied using the moduli space of $\mathcal{N}=4$ field theories in three dimensions. For instantons in a simple gauge group $G$ on $\mathbb{C}^2/\mathbb{Z}_n$, the Hilbert series of such an instanton moduli space is computed from the Coulomb branch of the quiver given by the affine Dynkin diagram of $G$ with flavour nodes of unitary groups attached to various nodes of the Dynkin diagram. We provide a simple prescription to determine the ranks and the positions of these flavour nodes from the order of the orbifold $n$ and from the residual subgroup of $G$ that is left unbroken by the monodromy of the gauge field at infinity. For $G$ a simply laced group of type $A$, $D$ or $E$, the Higgs branch of such a quiver describes the moduli space of instantons in the projective unitary group $PU(n) \cong U(n)/U(1)$ on the orbifold $\mathbb{C}^2/\hat{G}$, where $\hat{G}$ is the discrete group that is in McKay correspondence to $G$. Moreover, we present the quiver whose Coulomb ...

  18. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  19. Brewhouse-resident microbiota are responsible for multi-stage fermentation of American coolship ale.

    Directory of Open Access Journals (Sweden)

    Nicholas A Bokulich

    Full Text Available American coolship ale (ACA) is a type of spontaneously fermented beer that employs production methods similar to traditional Belgian lambic. In spite of its growing popularity in the American craft-brewing sector, the fermentation microbiology of ACA has not been previously described, and thus the interface between production methodology and microbial community structure is unexplored. Using terminal restriction fragment length polymorphism (TRFLP), barcoded amplicon sequencing (BAS), quantitative PCR (qPCR) and culture-dependent analysis, ACA fermentations were shown to follow a consistent fermentation progression, initially dominated by Enterobacteriaceae and a range of oxidative yeasts in the first month, then ceding to Saccharomyces spp. and Lactobacillales for the following year. After one year of fermentation, Brettanomyces bruxellensis was the dominant yeast population (occasionally accompanied by minor populations of Candida spp., Pichia spp., and other yeasts) and Lactobacillales remained dominant, though various aerobic bacteria became more prevalent. This work demonstrates that ACA exhibits a conserved core microbial succession in absence of inoculation, supporting the role of a resident brewhouse microbiota. These findings establish this core microbial profile of spontaneous beer fermentations as a target for production control points and quality standards for these beers.

  20. PROBLEME ACTUALE ALE CERCETĂRII TEXTULUI DE VULGARIZARE MEDICALĂ MEDIATIZAT

    Directory of Open Access Journals (Sweden)

    Aia DAVID

    2018-05-01

    Full Text Available This article identifies the linguistic and pragmatic features of the mediatized text of medical popularization. From the linguistic and interactional perspectives, the medical text presents certain identifying features, such as its vocabulary, objectivity, specific intertextuality, etc. The analysis of the path of medical texts towards the non-specialist public makes it possible to delimit the popularization techniques used: figures of style, reformulation, metalanguage and synonymous substitutions.

  1. PHYTOPLANKTON - WET WEIGHT and Other Data from ALEJANDRO DE HUMBOLDT From North Pacific Ocean from 19780108 to 19780407 (NODC Accession 9700071)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Phytoplankton data were collected from net tows from the ALEJANDRO DE HUMBOLDT in the North Pacific Ocean from 08 January 1978 to 07 April 1978. Chlorophyll A and...

  2. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem up-to-date. This paper investigates the effectiveness of three computational intelligence techniques, namely, covariance matrix adaptation evolution strategies, particle swarm optimization, as well as, differential evolution, to compute Nash equilibria of finite strategic games, as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibria of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
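
    To illustrate the formulation the abstract describes, the sketch below casts the Nash condition of Matching Pennies as a nonnegative regret function whose global minima are the equilibria, and minimizes it with SciPy's differential evolution. The regret-based objective is a standard construction assumed here, not necessarily the paper's exact function.

    ```python
    # Nash equilibrium of Matching Pennies as a global minimization problem.
    import numpy as np
    from scipy.optimize import differential_evolution

    A = np.array([[1, -1], [-1, 1]])     # row player's payoffs
    B = -A                               # zero-sum: column player's payoffs

    def regret(z):
        p = np.array([z[0], 1 - z[0]])   # row player's mixed strategy
        q = np.array([z[1], 1 - z[1]])   # column player's mixed strategy
        r1 = (A @ q).max() - p @ A @ q   # best-response gain for player 1
        r2 = (p @ B).max() - p @ B @ q   # best-response gain for player 2
        return r1 + r2                   # = 0 exactly at a Nash equilibrium

    res = differential_evolution(regret, bounds=[(0, 1), (0, 1)],
                                 seed=0, tol=1e-10)
    print("equilibrium:", res.x, "objective:", res.fun)  # ~[0.5, 0.5], ~0
    ```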

  3. Inequality in Participation in Adult Learning and Education (ALE): Effects of Micro- and Macro-Level Factors through a Comparative Study

    Science.gov (United States)

    Lee, Jeongwoo

    2017-01-01

    The objectives of this dissertation include describing and analyzing the patterns of inequality in ALE participation at both the micro and macro levels. Special attention is paid to social origins of individual adults and their association with two groups of macro-level factors, social inequality (income, education, and skill inequality) and…

  4. A New Method to Simulate Free Surface Flows for Viscoelastic Fluid

    Directory of Open Access Journals (Sweden)

    Yu Cao

    2015-01-01

    Full Text Available Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method combining the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on an open-source CFD toolbox called OpenFOAM, we designed an ALE-FVM free-surface simulation platform. The die-swell flow was then investigated with the proposed platform to further analyse the free-surface phenomenon. The results validate the correctness and effectiveness of the proposed method for free-surface simulation in both Newtonian and viscoelastic fluids.

  5. Active life expectancy from annual follow-up data with missing responses.

    Science.gov (United States)

    Izmirlian, G; Brock, D; Ferrucci, L; Phillips, C

    2000-03-01

    Active life expectancy (ALE) at a given age is defined as the expected remaining years free of disability. In this study, three categories of health status are defined according to the ability to perform activities of daily living independently. Several studies have used increment-decrement life tables to estimate ALE, without error analysis, from only a baseline and one follow-up interview. The present work conducts an individual-level covariate analysis using a three-state Markov chain model for multiple follow-up data. Using a logistic link, the model estimates single-year transition probabilities among states of health, accounting for missing interviews. This approach has the advantages of smoothing subsequent estimates and increased power by using all follow-ups. We compute ALE and total life expectancy from these estimated single-year transition probabilities. Variance estimates are computed using the delta method. Data from the Iowa Established Population for the Epidemiologic Study of the Elderly are used to test the effects of smoking on ALE on all 5-year age groups past 65 years, controlling for sex and education.
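
    The mechanics of going from single-year transition probabilities to ALE can be sketched as follows; the three-state matrix below uses made-up numbers, not estimates from the Iowa EPESE data, and it ignores the logistic covariate model and the delta-method variances.

    ```python
    # ALE from a single-year transition matrix among {active, disabled, dead}.
    import numpy as np

    # rows: current state, columns: state one year later (illustrative values)
    P = np.array([[0.88, 0.08, 0.04],   # active  -> active/disabled/dead
                  [0.20, 0.65, 0.15],   # disabled
                  [0.00, 0.00, 1.00]])  # dead (absorbing)

    def life_expectancies(P, start_state=0, horizon=60):
        state = np.zeros(3); state[start_state] = 1.0
        ale = tle = 0.0
        for _ in range(horizon):
            state = state @ P             # one-year update
            ale += state[0]               # expected years spent active
            tle += state[0] + state[1]    # expected years alive
        return ale, tle

    ale, tle = life_expectancies(P)
    print(f"ALE = {ale:.1f} years, total LE = {tle:.1f} years")
    ```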

  6. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  7. Sobre el uso de técnicas chopper para la reducción del ruido flicker en amplificadores para la captación de señales neuronales

    OpenAIRE

    Pérez Prieto, Norberto

    2016-01-01

    The acquisition of neural signals through electrodes connected to microelectronic circuits is necessary for clinical applications and for the control of sensorimotor prostheses, among many other biomedical applications. In all these applications, preserving the information contained in the captured signals depends critically on the performance of the amplifiers used at the head of the electronic processing chain. The problem is that the signals involved are very...

  8. Señalética como interfaz urbana: estudio de caso en Transmilenio - SITP, Bogotá

    OpenAIRE

    Chacón Chacón, Martha Estela

    2014-01-01

    This work develops as its central idea the role of signage in Transmilenio as a fundamental element in the functional relationship between the mobility and facilities systems set out in the POT (2003), drawing on the concept of perceptual accessibility as the articulator between the two systems. The research was carried out on three levels: the first level develops the conceptual basis; the second level analyses official documents (the POT 2003 and the Plan Maestro de Movilidad), and ...

  9. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users, it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  10. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
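
    As a rough illustration of the least-squares formulation (using the simpler linear CIE-space approximation mentioned above rather than the full nonlinear CIEL*a*b* program), the sketch below solves one pixel; the filter transmission matrices are invented placeholders, not measured glasses data.

    ```python
    # One-pixel anaglyph solve: match filtered appearance to each eye's color.
    import numpy as np

    M = np.array([[0.4124, 0.3576, 0.1805],   # linear sRGB -> CIE XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    FL = np.diag([1.0, 0.1, 0.05])            # red filter transmission (assumed)
    FR = np.diag([0.05, 0.3, 1.0])            # cyan filter transmission (assumed)

    left  = np.array([0.8, 0.4, 0.2])         # linear RGB seen by left eye
    right = np.array([0.7, 0.5, 0.3])         # linear RGB seen by right eye

    # Stack: the anaglyph pixel viewed through each filter should match each eye
    Amat = np.vstack([M @ FL, M @ FR])        # 6x3 overdetermined system
    b = np.concatenate([M @ left, M @ right])
    rgb, *_ = np.linalg.lstsq(Amat, b, rcond=None)
    print("anaglyph pixel:", np.clip(rgb, 0, 1))
    ```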

  11. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers of the importance of data analysis in enzymology, and of the modern methods that have developed concomitantly with computer hardware. It is also to validate researchers' computer programs with real and synthetic data to ascertain that the results produced are what they expected. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  12. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  13. EFICIENTIZAREA OBŢINERII SEDIMENTELOR FURAJERE B12 -VITAMINIZATE DIN APE REZIDUALE AGROINDUSTRIALE: 2. MODIFICĂRI ALE UTILAJULUI

    Directory of Open Access Journals (Sweden)

    Victor COVALIOV

    2017-03-01

    Full Text Available Through design modifications of the bioreactor devices and new solutions (separation of the acetogenic and methanogenic zones inside the bioreactor, recirculation of CO2 and its supplementation with exogenous H2, and adsorption of vitamin B12 from the post-fermentation liquid onto diatomite), the vitamin B12 content of the sludge from the methanogenic (anaerobic) digestion of post-distillery vinasse was raised to the quality of a B12-vitaminized forage concentrate, while biomethane production was simultaneously intensified, as elements of the improved ecological and economic efficiency of the anaerobic treatment of vinasse (an agro-industrial waste).

  14. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  15. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, in applied monitoring, and in novel field deployments.
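
    The flavour of the vectorized lag calculations involved can be seen in this small Python sketch, which computes second- and third-order structure functions of a 10 Hz scalar record with array slicing instead of explicit sample loops; the synthetic record and lag range are assumptions.

    ```python
    # Lagged structure functions of a high-frequency scalar record.
    import numpy as np

    rng = np.random.default_rng(2)
    temp = np.cumsum(rng.standard_normal(36000)) * 0.01 + 20.0  # 1 h at 10 Hz

    def structure_functions(x, max_lag=50):
        lags = np.arange(1, max_lag + 1)
        s2 = np.empty(lags.size); s3 = np.empty(lags.size)
        for i, r in enumerate(lags):
            d = x[r:] - x[:-r]          # all lag-r differences at once
            s2[i] = np.mean(d ** 2)
            s3[i] = np.mean(d ** 3)
        return lags, s2, s3

    lags, s2, s3 = structure_functions(temp)
    print("lag=1:", s2[0], s3[0])
    ```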

  16. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  17. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described

  18. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting the various fields of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods and an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  19. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier-Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy

  20. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  1. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  2. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  3. Influence of provenance on physical and mechanical properties wood of Pinus tropicalis Morelet in Viñales. Pinar del Río. Cuba

    Directory of Open Access Journals (Sweden)

    Yarelys García García

    2013-12-01

    Full Text Available The overall objective of this paper is to analyze the effect of provenance on the physical and mechanical properties of Pinus tropicalis Morelet wood, with a view to providing the information necessary for its rational use. Five provenances were selected in the experimental plots at the Experimental Station of Viñales, where 10 trees were randomly chosen and the following dendrometric variables were analyzed: diameter at 1.30 m, total height, crown diameter and crown height. In turn, at a height of 1.30 m a log of 50 cm in length was obtained for the study of density, total volumetric shrinkage, radial, longitudinal and tangential shrinkage, and compression. The results show that provenance is not a variable with a marked influence on the physical and mechanical properties analyzed. Diameter at 1.30 m and crown diameter are the dendrometric variables with the best correlations with the wood properties examined. In view of the results, wood from the La Jagua and Viñales provenances must be dried and put into service with great care, since it has a higher coefficient of anisotropy.

  4. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance. .

  5. Interfaz humano-computadora basada en señales de electrooculografía para personas con discapacidad motriz

    OpenAIRE

    Daniel Pacheco Bautista; Ignacio Algredo Badillo; David De la Rosa Mejía; Aurelio Horacio Heredia Jiménez

    2014-01-01

    This paper presents the development of a prototype that assists people with certain motor disabilities in interacting with the computer in a simple and inexpensive way, using electrooculography signals. This technique detects eye movements based on recording the potential difference that exists between the cornea and the retina; this property is exploited in this project to control the movement of the mouse cursor precisely over the...

  6. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
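
    A toy version of the two-rating computation described above (with made-up calibration numbers and an assumed rectangular channel, not data from any USGS station) looks like this:

    ```python
    # Index velocity method: Q = A(stage) * V(index velocity).
    import numpy as np

    # calibration pairs from discharge measurements (illustrative values)
    v_index = np.array([0.3, 0.5, 0.8, 1.1])     # ADVM index velocity, m/s
    v_mean  = np.array([0.25, 0.44, 0.73, 1.0])  # measured mean velocity, m/s

    # index velocity rating: V = a + b * v_index, fitted by linear regression
    b, a = np.polyfit(v_index, v_mean, 1)        # slope first, then intercept

    def stage_area(stage):
        """Stage-area rating for an assumed 20 m wide rectangular channel."""
        return 20.0 * stage                      # m^2

    def discharge(stage, vi):
        return stage_area(stage) * (a + b * vi)  # Q = A * V, in m^3/s

    print(f"Q = {discharge(stage=1.8, vi=0.9):.1f} m^3/s")
    ```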

  7. Integración de un sistema de neuroseñales para detectar expresiones en el análisis de material multimedia

    Directory of Open Access Journals (Sweden)

    Luz Ángela Moreno-Cueva

    2014-12-01

    Full Text Available This paper presents progress on the integration of a low-cost commercial device for capturing neural signals, with the aim of recording the expressions of a user of multimedia material. Everyone adopts various kinds of expressions when watching television, films, commercials or other texts. Examples of these expressions include clenching the teeth during suspense scenes; moving the head back when a 3D film gives the sensation of an object being thrown out of the screen; looking away during horror scenes; smiling at emotional commercials; laughing out loud at humorous scenes, and even falling asleep out of boredom. The general idea of this system is to capture all these expressions together with emotional signals such as the levels of attention, frustration and meditation, so that experts in the creation of multimedia material can analyse and improve their products. Experimental tests showing the good performance of the system are presented.

  8. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. The possibility of reproducing Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
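
    Although the paper works in C++, the cross-multiplication scheme is easy to sketch in Python for illustration: digit products are accumulated diagonal by diagonal (units, cross terms, tens, and so on), carrying as needed. This is a generic reconstruction of the scheme, not the authors' code.

    ```python
    # Fibonacci-style "cross" multiplication, diagonal by diagonal with carries.
    def cross_multiply(a: int, b: int) -> int:
        da = [int(c) for c in str(a)][::-1]     # digits, least significant first
        db = [int(c) for c in str(b)][::-1]
        n = max(len(da), len(db))
        da += [0] * (n - len(da)); db += [0] * (n - len(db))
        result, carry = 0, 0
        for k in range(2 * n - 1):              # k-th diagonal of the cross
            s = carry + sum(da[i] * db[k - i]
                            for i in range(max(0, k - n + 1),
                                           min(k, n - 1) + 1))
            result += (s % 10) * 10 ** k        # write this diagonal's digit
            carry = s // 10                     # carry to the next diagonal
        return result + carry * 10 ** (2 * n - 1)

    assert cross_multiply(48, 36) == 48 * 36    # 1728
    print(cross_multiply(1234, 5678))
    ```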

  9. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, and on calculation methods once thought impractical, also continues actively, seeking new developments enabled by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and much effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described, to introduce recent trends in the nuclear design of reactors. 1) Advancement of computational methods, 2) Reactor core design and management of light water reactors, and 3) Nuclear design of fast reactors. (G.K.)

  10. Drawing and writing: An ALE meta-analysis of sensorimotor activations.

    Science.gov (United States)

    Yuan, Ye; Brown, Steven

    2015-08-01

    Drawing and writing are the two major means of creating what are referred to as "images", namely visual patterns on flat surfaces. They share many sensorimotor processes related to visual guidance of hand movement, resulting in the formation of visual shapes associated with pictures and words. However, while the human capacity to draw is tens of thousands of years old, the capacity for writing is only a few thousand years old, and widespread literacy is quite recent. In order to compare the neural activations for drawing and writing, we conducted two activation likelihood estimation (ALE) meta-analyses for these two bodies of neuroimaging literature. The results showed strong overlap in the activation profiles, especially in motor areas (motor cortex, frontal eye fields, supplementary motor area, cerebellum, putamen) and several parts of the posterior parietal cortex. A distinction was found in the left posterior parietal cortex, with drawing showing a preference for a ventral region and writing a dorsal region. These results demonstrate that drawing and writing employ the same basic sensorimotor networks but that some differences exist in parietal areas involved in spatial processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. COMPARATIVO DE LOS ALGORITMOS DE DIMENSIÓN FRACTAL HIGUCHI, KATZ Y MULTIRESOLUCIÓN DE CONTEO DE CAJAS EN SEÑALES EEG BASADAS EN POTENCIALES RELACIONADOS POR EVENTOS

    Directory of Open Access Journals (Sweden)

    Santiago Fernández Fraga

    Full Text Available Obtaining information by measuring signals recorded during different processes or physiological conditions of the brain is important for developing computer interfaces that translate brain electrical signals into computational control commands. An electroencephalogram (EEG) records the electrical activity of the brain in response to different external stimuli (event-related potentials). Analysing these signals makes it possible to identify and distinguish specific states of the brain's physiological function. The fractal dimension (FD) has been used as a tool for analysing biomedical waveforms; in particular, it has been used to quantify the complexity of EEG time series. This paper analyses the HeadIT database of biomedical EEG time series, computing the FD with the Higuchi, Katz and multiresolution box-counting methods, in order to show the relationship between the FD estimation method and the physiological condition of the signal, based on event-related brain potentials.
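
    Of the three estimators compared, Higuchi's is the easiest to sketch; a minimal Python version follows, with k_max chosen arbitrarily for illustration.

    ```python
    # Higuchi fractal dimension estimator (sketch).
    import numpy as np

    def higuchi_fd(x, k_max=8):
        x = np.asarray(x, dtype=float)
        N = x.size
        log_inv_k, log_L = [], []
        for k in range(1, k_max + 1):
            Lk = []
            for m in range(k):                       # k downsampled sub-series
                idx = np.arange(m, N, k)
                if idx.size < 2:
                    continue
                length = np.abs(np.diff(x[idx])).sum()
                norm = (N - 1) / ((idx.size - 1) * k)  # Higuchi's normalisation
                Lk.append(length * norm / k)
            log_inv_k.append(np.log(1.0 / k))
            log_L.append(np.log(np.mean(Lk)))
        slope, _ = np.polyfit(log_inv_k, log_L, 1)   # FD is the slope
        return slope

    rng = np.random.default_rng(3)
    print("white noise FD ~ 2:", round(higuchi_fd(rng.standard_normal(2000)), 2))
    ```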

  12. Different patterns and development characteristics of processing written logographic characters and alphabetic words: an ALE meta-analysis.

    Science.gov (United States)

    Zhu, Linlin; Nie, Yaoxin; Chang, Chunqi; Gao, Jia-Hong; Niu, Zhendong

    2014-06-01

    The neural systems for phonological processing of written language have been well identified now, while models based on these neural systems are different for different language systems or age groups. Although each of such models is mostly concordant across different experiments, the results are sensitive to the experiment design and intersubject variability. Activation likelihood estimation (ALE) meta-analysis can quantitatively synthesize the data from multiple studies and minimize the interstudy or intersubject differences. In this study, we performed two ALE meta-analysis experiments: one was to examine the neural activation patterns of the phonological processing of two different types of written languages and the other was to examine the development characteristics of such neural activation patterns based on both alphabetic language and logographic language data. The results of our first meta-analysis experiment were consistent with the meta-analysis which was based on the studies published before 2005. And there were new findings in our second meta-analysis experiment, where both adults and children groups showed great activation in the left frontal lobe, the left superior/middle temporal gyrus, and the bilateral middle/superior occipital gyrus. However, the activation of the left middle/inferior frontal gyrus was found increase with the development, and the activation was found decrease in the following areas: the right claustrum and inferior frontal gyrus, the left inferior/medial frontal gyrus, the left middle/superior temporal gyrus, the right cerebellum, and the bilateral fusiform gyrus. It seems that adults involve more phonological areas, whereas children involve more orthographic areas and semantic areas. Copyright © 2013 Wiley Periodicals, Inc.

  13. Validación de señales vibro-acústicas para el diagnóstico de fallas en rodamientos en un generador síncrono

    Directory of Open Access Journals (Sweden)

    Zulma Yadira Medrano Hurtado

    2017-01-01

    Full Text Available This paper describes the procedure and tools used to measure and diagnose vibration signals captured through acceleration transducers (piezoelectric accelerometers) and acoustic transducers (omnidirectional microphones). In addition, an experimental design based on the Taguchi methodology was developed to validate the information recorded from the vibration signals for bearings without a fault and with an artificial fault, respectively. The artificial fault consisted of a crack produced in the cage of an SKF-6303-2RSH bearing. The method is non-invasive, since it uses microphones to analyse the vibration, which means no transducer needs to be mounted on the machine, and it is also sensitive to cage faults.

  14. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting the various fields of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the first issue, giving an overview of the methods and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  15. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is discussed that while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  16. Adopting an Evidence-Based Lifestyle Physical Activity Program: Dissemination Study Design and Methods.

    Science.gov (United States)

    Dunn, Andrea L; Buller, David B; Dearing, James W; Cutter, Gary; Guerra, Michele; Wilcox, Sara; Bettinghaus, Erwin P

    2012-06-01

    BACKGROUND: There is a scarcity of research studies that have examined academic-commercial partnerships to disseminate evidence-based physical activity programs. Understanding this approach to dissemination is essential because academic-commercial partnerships are increasingly common. Private companies have used dissemination channels and strategies to a degree that academicians have not, and declining resources require academicians to explore these partnerships. PURPOSE: This paper describes a retrospective case-control study design including the methods, demographics, organizational decision-making, implementation rates, and marketing strategy for Active Living Every Day (ALED), an evidence-based lifestyle physical activity program that has been commercially available since 2001. Evidence-based public health promotion programs rely on organizations and targeted sectors to disseminate these programs although relatively little is known about organizational-level and sector-level influences that lead to their adoption and implementation. METHODS: Cases (n=154) were eligible if they had signed an ALED license agreement with Human Kinetics (HK), publisher of the program's textbooks and facilitator manuals, between 2001 and 2008. Two types of controls were matched (2:2:1) and stratified by sector and region. Active controls (Control 1; n=319) were organizations that contacted HK to consider adopting ALED. Passive controls (Control 2; n=328) were organizations that received unsolicited marketing materials and did not initiate contact with HK. We used Diffusion of Innovations Theory (DIT) constructs as the basis for developing the survey of cases and controls. RESULTS: Using the multi-method strategy recommended by Dillman, a total of n=801 cases and controls were surveyed. Most organizations were from the fitness sector followed by medical, nongovernmental, governmental, educational, worksite and other sectors with significantly higher response rates from government

  17. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  18. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  19. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

    Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  20. Desarrollo de una interfaz gráfica, drivers y librerías para hardware de adquisición y generación de señales

    OpenAIRE

    Bernardino Perez, Daniel

    2012-01-01

    Project carried out in collaboration with Sygnadyne. Development of a graphical interface in C++/Qt, Windows drivers (WDF) and libraries for signal acquisition and generation hardware based on the PXI platform.

  1. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times; Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance; Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems; New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  2. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given.

  3. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel`son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  4. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and the cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers the problems of load-balancing, collision detection, process synchronization and distributed control of the animation.
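
    For illustration, a minimal sketch of one particle-method time step for an elastic body follows: point masses joined by linear springs, advanced with explicit Euler under gravity and simple velocity damping. The stiffness, damping, mass and time-step values are illustrative and not taken from the paper.

        import numpy as np

        def step(pos, vel, springs, rest, k=50.0, c=0.5, m=1.0, dt=1e-3,
                 g=np.array([0.0, -9.81])):
            """Advance particle positions/velocities by one explicit Euler step."""
            force = np.tile(m * g, (len(pos), 1))      # gravity on every particle
            for (i, j), L0 in zip(springs, rest):
                d = pos[j] - pos[i]
                L = np.linalg.norm(d)
                f = k * (L - L0) * d / L               # linear spring force
                force[i] += f
                force[j] -= f
            force -= c * vel                           # crude velocity damping
            vel = vel + dt * force / m
            pos = pos + dt * vel
            return pos, vel

        # Two particles joined by one slightly stretched spring:
        pos = np.array([[0.0, 0.0], [1.2, 0.0]])
        vel = np.zeros_like(pos)
        springs, rest = [(0, 1)], [1.0]
        for _ in range(1000):
            pos, vel = step(pos, vel, springs, rest)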

  5. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.

  6. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  7. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  8. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

    The authors introduce methods in computed angiotomography of the brain. The setting of the scan planes and levels and the minimum dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with widely available CT scanners. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in the screening of cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  9. MANIFESTĂRI ALE CONFLICTULUI MUNCĂ-FAMILIE LA ANGAJAȚII DIN DOMENIUL PUBLIC/PRIVAT: DIMENSIUNI COMPARATIVE

    Directory of Open Access Journals (Sweden)

    Viorica ȘAITAN

    2018-03-01

    One of the constant concerns of recent organizational research is the investigation of work-family conflict, which arises from a person's inability to optimally integrate the demands of the family role with those of the professional role, and whose consequences are mainly negative emotional, cognitive and behavioral states. From this perspective, the present study presents comparative results on how work-family conflict is experienced by employees in the Republic of Moldova (355 subjects in total: 184 employees in the public sector and 171 in the private sector) and on its consequences at the affective level (emotional exhaustion), the cognitive level (perceived organizational support and perceived life satisfaction) and the behavioral level (role commitment, work-family flexibility).

  10. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduce the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.
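
    For concreteness, one common form of such a spectral representation in inverse coordinates is the up-down-symmetric Fourier ansatz sketched below in LaTeX; the notation and truncation are illustrative, since the abstract does not give the exact expansion used.

        R(\rho,\theta) = \sum_{m=0}^{M} R_m(\rho)\cos(m\theta), \qquad
        Z(\rho,\theta) = \sum_{m=1}^{M} Z_m(\rho)\sin(m\theta)

    Substituting an ansatz of this kind into the variational form of the equilibrium equations and truncating at M moments yields the finite set of coupled ordinary differential equations for the moment profiles R_m(\rho) and Z_m(\rho) described above.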

  11. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  12. Análisis preliminar del cuestionario señales de alerta de recaída (AWARE en drogodependientes peruanos

    Directory of Open Access Journals (Sweden)

    Cristian Solano

    2017-08-01

    The objective of the present study was to analyze the internal structure of the AWARE 3.0 questionnaire in drug addicts. A total of 240 subjects undergoing residential treatment (205 men and 35 women) between 18 and 61 years of age were evaluated with the AWARE relapse warning signs scale. The analyses confirmed the existence of a single factor; in addition, five confirmatory models were tested, including a method factor that was shown to influence the original model. The reliability analysis yielded adequate scores for both observed and latent variables, which represented equality at the conceptual and unit level (congeneric and tau-equivalent models). The results indicate a better fit with the direct-item model only, and a short version is also proposed. These findings provide a new perspective on the structure of the instrument and a new version that helps complement the assessment and detection of relapse warning signs.

  13. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
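
    A minimal sketch of the underlying look-up-table idea follows: fringe patterns are pre-computed off-line, one per quantized depth, and the on-line stage only shifts and accumulates them per object point. The zone-pattern form, wavelength, pixel pitch and the wrap-around shift are illustrative simplifications, not the paper's distance-factor formulation or its GPU implementation.

        import numpy as np

        wavelength, pitch = 532e-9, 8e-6
        N = 512                                    # hologram is N x N pixels
        x = (np.arange(N) - N // 2) * pitch
        X, Y = np.meshgrid(x, x)

        def zone_pattern(z):
            """Pre-computed fringe pattern for a point source at depth z."""
            r = np.sqrt(X**2 + Y**2 + z**2)
            return np.exp(1j * 2 * np.pi * r / wavelength) / r

        # Off-line: build the LUT, one entry per quantized depth.
        depths = np.linspace(0.1, 0.2, 16)
        lut = {i: zone_pattern(z) for i, z in enumerate(depths)}

        # On-line: accumulate the field point by point from shifted LUT
        # entries (np.roll wraps around; edge effects are ignored here).
        points = [(-40, 10, 3, 1.0), (25, -5, 9, 0.7)]  # (px, py, depth idx, amp)
        field = np.zeros((N, N), dtype=complex)
        for px, py, iz, amp in points:
            field += amp * np.roll(lut[iz], shift=(py, px), axis=(0, 1))
        hologram = np.angle(field)                 # phase-only hologram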

  14. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of the scientific method in computing is becoming increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  15. Meditation-related activations are modulated by the practices needed to obtain it and by the expertise: an ALE meta-analysis study

    Science.gov (United States)

    Tomasino, Barbara; Fregona, Sara; Skrap, Miran; Fabbro, Franco

    2013-01-01

    The brain network governing meditation has been studied using a variety of meditation practices and techniques eliciting different cognitive processes (e.g., silence, attention to one's own body, sense of joy, mantras, etc.). It is very possible that different practices of meditation are subserved by largely, if not entirely, disparate brain networks. This assumption was tested by conducting an activation likelihood estimation (ALE) meta-analysis of meditation neuroimaging studies, which assessed 150 activation foci from 24 experiments. Different ALE meta-analyses were carried out. One involved the subset of studies involving meditation induced through exercising focused attention (FA). The network included clusters bilaterally in the medial gyrus, the left superior parietal lobe, the left insula and the right supramarginal gyrus (SMG). A second analysis addressed the studies involving meditation states induced by chanting or by repetition of words or phrases, known as “mantra.” This type of practice elicited a cluster of activity in the right SMG, the SMA bilaterally and the left postcentral gyrus. Furthermore, the last analyses addressed the effect of meditation experience (i.e., short- vs. long-term meditators). We found that frontal activation was present for short-term, as compared with long-term experience meditators, confirming that experts are better able to sustain attentional focus, instead recruiting the right SMG and concentrating on aspects involving disembodiment. PMID:23316154

  16. Meditation related activations are modulated by the practices needed to obtain it and by the expertise: an ALE meta-analysis study

    Directory of Open Access Journals (Sweden)

    Barbara eTomasino

    2013-01-01

    The brain network governing meditation has been studied using a variety of meditation practices and techniques eliciting different cognitive processes (e.g., silence, attention to one's own body, sense of joy, mantras, etc.). It is very possible that different practices of meditation are subserved by largely, if not entirely, disparate brain networks. This assumption was tested by conducting an activation likelihood estimation (ALE) meta-analysis of meditation neuroimaging studies, which assessed 150 activation foci from 24 experiments. Different ALE meta-analyses were carried out. One involved the subset of studies involving meditation induced through exercising focused attention. The network included clusters bilaterally in the medial gyrus, the left superior parietal lobe, the left insula and the right supramarginal gyrus. A second analysis addressed the studies involving meditation states induced by chanting or by repetition of words or phrases, known as “mantra”. This type of practice elicited a cluster of activity in the right supramarginal gyrus, the SMA bilaterally and the left postcentral gyrus. Furthermore, the last analyses addressed the effect of meditation experience (i.e., short- vs. long-term meditators). We found that frontal activation was present for short-term, as compared with long-term experience meditators, confirming that experts are better able to sustain attentional focus, instead recruiting the right supramarginal gyrus and concentrating on aspects involving disembodiment.

  17. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality “shadow” and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  18. Active Life Expectancy and Functional Health Transition among Filipino Older People

    Directory of Open Access Journals (Sweden)

    Grace T. Cruz

    2007-12-01

    The study provides baseline information on the functional health transition patterns of older people and computes the Active Life Expectancy (ALE) using a multistate life table method. Findings on ALE demonstrate that females and urban residents live longer and spend a greater proportion of their remaining life in an active state compared to their counterparts. Health transition analysis indicates that a significant proportion experience recovery. Age, sex, place of residence and health status/behavior indicators (self-assessed health, drinking and exercise) display a significant influence on future health and mortality trajectories, although, surprisingly, education did not show any significant effect.

  19. INTERFAZ CEREBRO COMPUTADOR BASADO EN SEÑALES EEG PARA EL CONTROL DE MOVIMIENTO DE UNA PROTESIS DE MANO USANDO ANFIS

    Directory of Open Access Journals (Sweden)

    Alexandra Bedoya-Rojas

    2014-09-01

    Currently, a large number of people worldwide live with limb amputations, with the missing limbs usually replaced by mechanical prostheses. Electromechanical prostheses, meanwhile, have been gaining ground, supported by different types of interfaces such as brain-computer interfaces, which have improved their functionality; despite representative results for prosthesis control, this remains an open field of research seeking better effectiveness and efficiency. This study presents a methodology for classifying electroencephalographic (EEG) signals for the control of hand prosthesis movement, based on the adaptive neuro-fuzzy inference system (ANFIS) applied to features obtained through the wavelet transform (WT) and fuzzy rough sets (FRS) from EEG signals recorded with the 10-10 system. The performance of the proposed system was measured using 70-30 cross-validation with 30 repetitions, obtaining high accuracy, which means that this model has potential as a classifier for detecting EEG changes and generating commands to control hand movement.

  20. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  1. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.

  2. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
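
    As an illustration of the ray-driven family and the speed/accuracy trade-off discussed above, the sketch below computes a parallel-beam forward projection by sampling the image with bilinear interpolation along each ray; accuracy improves with rays_per_bin at proportional cost. The geometry and sampling choices are illustrative, not those of the paper.

        import numpy as np

        def bilinear(img, xs, ys):
            """Bilinearly interpolate img at (xs, ys); zero outside the image."""
            h, w = img.shape
            x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
            fx, fy = xs - x0, ys - y0
            val = np.zeros_like(xs)
            for dx, dy, wgt in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                                (0, 1, (1 - fx) * fy), (1, 1, fx * fy)]:
                xi, yi = x0 + dx, y0 + dy
                ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
                val[ok] += wgt[ok] * img[yi[ok], xi[ok]]
            return val

        def forward_project(img, theta, n_bins, rays_per_bin=4, n_steps=256):
            """Parallel-beam projection of img at angle theta (radians)."""
            n = img.shape[0]
            proj = np.zeros(n_bins)
            t = np.linspace(-n / 2, n / 2, n_steps)    # samples along each ray
            for b in range(n_bins):
                for r in range(rays_per_bin):
                    s = b + (r + 0.5) / rays_per_bin - n_bins / 2  # detector offset
                    xs = s * np.cos(theta) - t * np.sin(theta) + n / 2
                    ys = s * np.sin(theta) + t * np.cos(theta) + n / 2
                    proj[b] += bilinear(img, xs, ys).sum() * (t[1] - t[0])
                proj[b] /= rays_per_bin
            return proj

        phantom = np.zeros((128, 128)); phantom[48:80, 40:88] = 1.0
        p = forward_project(phantom, np.pi / 4, n_bins=128)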

  3. Generador de Señales y Amplificador de Alto Voltaje para una Instalación de Medición del Lazo de Histéresis Ferroeléctrica

    Directory of Open Access Journals (Sweden)

    J. I. Benavides Benitez

    2011-03-01

    This work describes a signal generator and a high-voltage amplifier used to generate and amplify the signals required by an installation for measuring the hysteresis loop in samples of ferroelectric materials. The programs developed to control the generator from a computer with the aid of a data acquisition system are also described.

  4. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  5. Reconocimiento de valvulopatías cardíacas en señales de fonocardiografía empleando la transformada Gabor

    OpenAIRE

    Echeverry Correa, Julián David; López, Andrés Felipe; López, Juan Fernando

    2007-01-01

    This work presents a characterization methodology based on the time-frequency representation of phonocardiographic signals with the aim of recognizing cardiac valve diseases. The nature of these pathologies makes them suitable for characterization by means of representations in the joint time-frequency space. The Gabor transform is used to map the records into this type of two-dimensional representation. The classification percentages, ...

  6. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E. Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  7. Numismatic iconography over two evidences of sigillata hispanica from the roman town of Los Bañales (Uncastillo, Zaragoza

    Directory of Open Access Journals (Sweden)

    Javier Andreu Pintado

    2012-09-01

    The following paper presents two new pieces of Hispanic sigillata from the Roman city of Los Bañales (Uncastillo, Zaragoza), in the center of the Vascones area, that show decorations made with coins of Marcus Aurelius. Both pieces are analyzed in the context of this practice in the Hispanic pottery productions of Tricio, first in Flavian times and then, briefly, during the second century AD. The paper also offers an interpretative study of the phenomenon, attested in Hispanic sigillata in the first years of the reign of Marcus Aurelius and Lucius Verus, and proposes a plausible explanation of its causes.

  8. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  9. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  10. Decodificación de Movimientos Individuales de los Dedos y Agarre a Partir de Señales Mioeléctricas de Baja Densidad

    Directory of Open Access Journals (Sweden)

    John J. Villarejo Mayor

    2017-04-01

    of able-bodied subjects. Different methods were analyzed to classify individual finger flexion, hand gestures and different grasps using four electrodes and considering the low level of muscle contraction in these tasks. Multiple features of sEMG signals were also analyzed, considering traditional magnitude-based features and fractal analysis. Statistical significance was computed for all the methods using different sets of features, for both groups of subjects (able-bodied and amputees). For amputees, results showed accuracy up to 99.4% for individual finger movements, higher than that achieved for grasp movements, up to 93.3%. The best performance was achieved using the support vector machine (SVM), followed very closely by K-nearest neighbors (KNN). However, KNN produces a better global performance because it is faster than SVM, which implies an advantage for real-time applications. The results show that the method proposed here is suitable for accurately controlling dexterous prosthetic hands, providing more functionality and better acceptance for amputees. Keywords: Myoelectric signals, upper-limb prosthesis, low-density surface electromyography, dexterous hand gestures, pattern recognition
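
    A minimal sketch of the SVM-versus-KNN comparison reported above, using scikit-learn with synthetic feature vectors standing in for the sEMG features; the feature construction, classifier settings and data are illustrative, not those of the study.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-in for per-trial sEMG feature vectors
        # (e.g. magnitude-based and fractal features from 4 electrodes).
        rng = np.random.default_rng(0)
        n_classes, n_per_class, n_feat = 5, 40, 16
        X = np.vstack([rng.normal(c, 1.0, (n_per_class, n_feat))
                       for c in range(n_classes)])
        y = np.repeat(np.arange(n_classes), n_per_class)

        for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                          ("KNN", KNeighborsClassifier(n_neighbors=5))]:
            model = make_pipeline(StandardScaler(), clf)
            acc = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name}: mean CV accuracy = {acc:.3f}")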

  11. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, and D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  12. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  13. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th...

  16. Prueba chi-cuadrado y enfoque de señales como modelos de alerta temprana de crisis bancarias: aplicación al caso ecuatoriano

    OpenAIRE

    Rumbea Pavisic, Juan Francisco; Ayala Salcedo, Roberto Andres

    2009-01-01

    This work models the Ecuadorian banking crisis of 1995-1996. It analyzes the specific circumstances that differentiate banks from other firms in a market economy, and then reviews the experience of the crisis. Finally, an early-warning model for the 1995-1996 banking crisis is built based on the signals approach and the chi-squared test.

  17. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  18. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution is the establishment of relationships among hard disk dumps (images), RAM and network data. The method is intended for describing the properties of an information security incident during the internal post-incident audit of computer equipment. The hard disk dumps are received and formed at the first step, followed by the separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis for the formation of the graph. The separated data are recorded in a non-relational database management system (NoSQL) adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method enables a human expert in information security or computer forensics to carry out a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on the internal audit of computer equipment and increases the accuracy and informativeness of such audits. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.
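
    A sketch of the graph idea follows: artifacts extracted from disk, RAM and network dumps become attributed nodes, and edges link artifacts that share attribute values. The attribute names and the linking rule are illustrative, not the paper's schema, and an in-memory networkx graph stands in for the NoSQL graph store.

        import networkx as nx

        artifacts = [
            ("file:report.doc",   {"source": "disk", "user": "alice", "md5": "9e10"}),
            ("proc:winword.exe",  {"source": "ram",  "user": "alice", "md5": "9e10"}),
            ("conn:10.0.0.5:443", {"source": "net",  "user": "alice"}),
        ]

        g = nx.Graph()
        g.add_nodes_from(artifacts)

        # Link any two artifacts sharing a non-trivial attribute value.
        nodes = list(g.nodes(data=True))
        for i, (n1, a1) in enumerate(nodes):
            for n2, a2 in nodes[i + 1:]:
                shared = {k for k in a1 if k != "source" and a2.get(k) == a1[k]}
                if shared:
                    g.add_edge(n1, n2, shared=sorted(shared))

        print(g.edges(data=True))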

  19. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
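
    For reference, a minimal conjugate gradient iteration for symmetric positive-definite systems is sketched below; in a domain-decomposition setting it is chiefly the matrix-vector product A @ p that is split across subdomains and processors. The test problem is an illustrative 1-D Poisson matrix.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for symmetric positive-definite A."""
            x = np.zeros_like(b)
            r = b - A @ x                  # initial residual
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p                 # the step a DD method parallelizes
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = conjugate_gradient(A, b)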

  20. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number up to 10,000 is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A method for reducing the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on the upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using a pseudorandom number and the Monte-Carlo method. The dSFMT (Double precision SIMD-oriented Fast Mersenne Twister), a further developed version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
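
    A minimal sketch of the sampling chain described above: uniform variates are mapped to normal perturbations with the Box–Muller method and applied to uncertain inputs over many trials. NumPy's default generator stands in for dSFMT, and the parameter names, error sizes and surrogate model are illustrative, not values from the “Monju” analysis.

        import numpy as np

        rng = np.random.default_rng(42)

        def box_muller(n, rng):
            """Standard normals from uniform variates (cosine branch only)."""
            u1 = 1.0 - rng.random(n)                  # avoid log(0)
            u2 = rng.random(n)
            return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

        nominal = {"flow_rate": 1.0, "heat_transfer_coef": 1.0}    # relative
        sigma   = {"flow_rate": 0.02, "heat_transfer_coef": 0.05}  # 1-sigma

        n_trials = 10_000
        results = []
        for _ in range(n_trials):
            z = box_muller(len(nominal), rng)
            inputs = {name: nominal[name] * (1.0 + s * zi)
                      for (name, s), zi in zip(sigma.items(), z)}
            # A real study would run the system code (one computation deck)
            # here; a cheap linear surrogate keeps the sketch self-contained.
            results.append(500.0 + 40 * inputs["flow_rate"]
                           - 15 * inputs["heat_transfer_coef"])

        print(np.mean(results), np.std(results))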

  1. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  2. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
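
    Restating the definition in symbols (a paraphrase of the abstract, not the paper's notation): with \mathcal{N} a nonlinear and \mathcal{L} a related linear differential operator, a BLUES function B satisfies

        \mathcal{N}[B](x) = \delta(x) \quad\text{and}\quad \mathcal{L}[B](x) = \delta(x),

    and for a given source f the convolution

        u(x) = (B * f)(x) = \int B(x - x')\, f(x')\, \mathrm{d}x'

    is the object used to build either an exact solution for a related source or a piecewise-analytical approximation for f itself.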

  3. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  4. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  5. Comparison of DNA-based techniques for differentiation of production strains of ale and lager brewing yeast.

    Science.gov (United States)

    Kopecká, J; Němec, M; Matoulková, D

    2016-06-01

    Brewing yeasts are classified into two species: Saccharomyces pastorianus and Saccharomyces cerevisiae. Most brewing yeast strains are natural interspecies hybrids, typically polyploids, and their identification is thus often difficult, giving heterogeneous results according to the method used. We performed genetic characterization of a set of brewing yeast strains coming from several yeast culture collections by a combination of various DNA-based techniques. The aim of this study was to select a method for species-specific identification of yeast and discrimination of yeast strains according to their technological classification. A group of 40 yeast strains was characterized using PCR-RFLP analysis of the ITS-5·8S, NTS, HIS4 and COX2 genes, multiplex PCR, RAPD-PCR of genomic DNA, mtDNA-RFLP and electrophoretic karyotyping. Reliable differentiation of yeast to the species level was achieved by PCR-RFLP of the HIS4 gene. Numerical analysis of the obtained RAPD fingerprints and karyotypes revealed species-specific clustering corresponding to the technological classification of the strains. The taxonomic position and partial hybrid nature of the strains were verified by multiplex PCR. Differentiation among species using PCR-RFLP of the ITS-5·8S and NTS region was shown to be unreliable. Karyotyping and RFLP of mitochondrial DNA evinced small inaccuracies in strain categorization. PCR-RFLP of the HIS4 gene and RAPD-PCR of genomic DNA are reliable and suitable methods for fast identification of yeast strains. RAPD-PCR with primer 21 is a fast and reliable method applicable to the differentiation of brewing yeasts, with only 35% similarity of fingerprint profile between the two main technological groups (ale and lager) of brewing strains. It was proved that the PCR-RFLP method of the HIS4 gene enables precise discrimination among three technologically important Saccharomyces species. Differentiation of brewing yeast to the strain level can be achieved using the RAPD-PCR technique. © 2016 The

  6. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  7. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to ε, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz...

  8. An arbitrary Lagrangian-Eulerian method for interfacial flows with insoluble surfactants

    Science.gov (United States)

    Yang, Xiaofeng

    Interfacial flows, fluid flows involving two or more fluids that do not mix, are common in many natural and industrial processes such as rain drop formation, crude oil recovery, polymer blending, fuel spray formation, and so on. Surfactants (surface active substances) play an important role in such processes because they significantly change the interfacial dynamics. In this thesis, an arbitrary Lagrangian-Eulerian (ALE) method has been developed to numerically simulate interfacial flows with insoluble surfactants. The interface is captured using a coupled level set and volume of fluid method. To evolve the surfactant concentration, the method directly tracks the surfactant mass and the interfacial area. The surfactant concentration, which determines the local surface tension through an equation of state, is then computed as surfactant mass per interfacial area. By directly tracking the surfactant mass, the method conserves the surfactant mass exactly. To accurately approximate the interfacial area, the fluid interface is reconstructed using piecewise parabolas. The evolution of the level set function, volume fraction, interfacial area, and the surfactant mass is performed using an ALE approach. The fluid flow is governed by Stokes equations, which are solved using a finite element method. The surface forces are included in the momentum equation using a continuum surface stress formulation. To efficiently resolve the complex interfacial dynamics, interfacial regions of high surface curvature, and near contact regions between two interacting interfaces, the grid near the interface is adaptively refined. The method is extendible to axisymmetric and 3D spaces, and can be coupled with other flow solvers, such as Navier-Stokes and viscoelastic flow solvers, as well. The method has been applied to study the effect of surfactants on drop deformation and breakup in an extensional flow. Drop deformation results are compared with available experimental and theoretical
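
    In symbols, the tracked quantities combine as follows, where m_s is the tracked surfactant mass and A the reconstructed interfacial area of an interface patch; the linear equation of state shown is one common illustrative choice, since the abstract does not specify the form used:

        \Gamma = \frac{m_s}{A}, \qquad
        \sigma(\Gamma) = \sigma_0\left(1 - \beta\,\frac{\Gamma}{\Gamma_0}\right)

    Here \sigma_0 is the clean-interface tension and \beta, \Gamma_0 are model constants; tracking m_s directly is what makes the scheme conserve surfactant mass exactly.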

  9. Nuevo Enfoque para la Clasificación de Señales EEG usando la Varianza de la Diferencia entre las Clases de un Clasificador Bayesiano

    Directory of Open Access Journals (Sweden)

    Thomaz R. Botelho

    2017-10-01

    Advances in rehabilitation robotics are greatly benefiting patients with physical disabilities. Assistive and rehabilitation devices can base their operation on physiological information from the muscles and the brain, obtained through electromyography (EMG) and electroencephalography (EEG), to detect the user's movement intention. This work presents a multimodal interface for the acquisition, synchronization and processing of EEG and inertial-sensor signals, to be applied in rehabilitation tasks with robotic exoskeletons. Experiments were performed with healthy individuals with the aim of analyzing movement intention, muscle activation and movement onset during knee-extension movements. The proposal is a new approach to EEG signal classification using a Bayesian classifier that takes into account the variance of the difference between the classes used. The contribution of this work is supported by results showing a 30% increase in classification accuracy with EEG signals compared with traditional classification approaches, in an offline analysis for the recognition of lower-limb movement intention.

  10. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  11. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.

  12. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new partial discipline of applied computer science, which makes use of methods and techniques of information processing in environmental protection. Thanks to the inter-disciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the art environmental computer science. The following important subjects are dealt with: Environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data and knowledge-based systems in the environmental sector. (orig.) [de

  13. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  14. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...

  15. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  16. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  17. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  18. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, Monte Carlo sampling methods, and light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.

  19. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  20. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, with the vector extrapolation method serving as its accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
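
    As a rough illustration of the accelerator idea (not the paper's multilevel aggregation scheme), the sketch below runs plain PageRank power iteration and periodically applies componentwise Aitken delta-squared extrapolation to the last three iterates. The matrix, damping factor and tolerances are illustrative assumptions.

        import numpy as np

        def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, every=10, max_iter=1000):
            """Power iteration for PageRank, periodically accelerated by
            componentwise Aitken delta-squared extrapolation of the last
            three iterates."""
            n = P.shape[0]
            v = np.full(n, 1.0 / n)                  # teleportation vector
            x = v.copy()
            hist = [x]
            for it in range(1, max_iter + 1):
                x_new = alpha * (P @ x) + (1 - alpha) * v
                hist = (hist + [x_new])[-3:]         # keep the last three iterates
                if it % every == 0 and len(hist) == 3:
                    x0, x1, x2 = hist
                    denom = x2 - 2.0 * x1 + x0
                    safe = np.abs(denom) > 1e-14
                    x_new = x2.copy()
                    x_new[safe] -= (x2[safe] - x1[safe]) ** 2 / denom[safe]
                    x_new = np.maximum(x_new, 0.0)
                    x_new /= x_new.sum()             # back to a probability vector
                if np.abs(x_new - x).sum() < tol:
                    return x_new
                x = x_new
            return x

        # tiny 3-page example; columns of P sum to 1 (column-stochastic)
        P = np.array([[0.0, 0.5, 1.0],
                      [0.5, 0.0, 0.0],
                      [0.5, 0.5, 0.0]])
        print(pagerank_extrapolated(P))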

  1. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
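
    As a concrete illustration of the retrieval loop described above, here is a minimal simulated-annealing sketch (one of the two ECMs named) that fits two parameters of a toy synthetic spectrum to observations. The model, step size and cooling schedule are invented for illustration and are not the implementation described in the article.

        import numpy as np

        def anneal(misfit, x0, step=0.05, t0=1.0, cooling=0.999, n_iter=20000, seed=0):
            """Minimize a misfit (fitness) function by simulated annealing
            with Gaussian moves and a geometric cooling schedule."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            f = misfit(x)
            best_x, best_f, t = x.copy(), f, t0
            for _ in range(n_iter):
                cand = x + rng.normal(scale=step, size=x.shape)   # random move
                fc = misfit(cand)
                # accept downhill always, uphill with Boltzmann probability
                if fc < f or rng.random() < np.exp(-(fc - f) / t):
                    x, f = cand, fc
                    if f < best_f:
                        best_x, best_f = x.copy(), f
                t *= cooling
            return best_x, best_f

        # toy retrieval: recover amplitude and scale of a synthetic spectrum
        wav = np.linspace(1.0, 2.0, 50)
        observed = 3.0 * np.exp(-wav / 0.7)
        misfit = lambda p: np.sum((p[0] * np.exp(-wav / max(p[1], 1e-3)) - observed) ** 2)
        print(anneal(misfit, x0=[1.0, 1.0]))    # approaches (3.0, 0.7)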

  2. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
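
    The walk-based estimator is simple enough to sketch. The toy version below (graph and constants invented for illustration) implements an end-point variant: each walk continues with probability c along a random out-link and otherwise stops, and the PageRank of a page is estimated from where walks terminate.

        import random
        from collections import Counter

        def mc_pagerank(out_links, c=0.85, runs_per_page=1000, seed=0):
            """End-point Monte Carlo PageRank: from every page start many short
            walks; each step continues with probability c along a random
            out-link, else the walk stops.  The PageRank of page j is estimated
            by the fraction of walks terminating at j."""
            rng = random.Random(seed)
            ends = Counter()
            for start in out_links:
                for _ in range(runs_per_page):
                    node = start
                    while rng.random() < c and out_links[node]:
                        node = rng.choice(out_links[node])
                    ends[node] += 1
            total = sum(ends.values())
            return {p: ends[p] / total for p in out_links}

        # tiny 4-page web graph given as out-link lists
        web = {1: [2, 3], 2: [3], 3: [1], 4: [1, 3]}
        print(mc_pagerank(web))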

  3. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

  4. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-02

    The potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands of conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to particle distribution, it is necessary to scale the DUO2 results to get a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released under identical damage conditions between spent fuel and a surrogate material such as DUO2. A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly-conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.

  5. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  6. Study of the variability of virtual wedges in an A.L.E. Primus, measured weekly over two years; Estudio de la variabilidad de las cunas virtuales en un A. L. E. Primus, medidas semanalmente, durante dos anos

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Segovia, J.; Ruiz Vazquez, M.; Carrera Magarino, F.

    2011-07-01

    We analyze the stability of the virtual wedges as part of the daily quality control of an electron linear accelerator (ALE), measuring each wedge angle weekly for the two photon energies available on the machine.

  7. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize the applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  8. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
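
    One of the applications listed above, selection of variables in regression, lends itself to a compact sketch: a binary-encoded genetic algorithm minimizing the BIC of least-squares subsets. All parameters (population size, mutation rate, selection scheme) are illustrative choices, not those of the review.

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(mask, X, y):
            """BIC of the least-squares fit using only the columns in `mask`
            (lower is better; the empty model is ruled out)."""
            k = mask.sum()
            if k == 0:
                return np.inf
            beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
            rss = np.sum((y - X[:, mask] @ beta) ** 2)
            n = len(y)
            return n * np.log(rss / n) + k * np.log(n)

        def ga_select(X, y, pop=30, gens=50, p_mut=0.05):
            n_var = X.shape[1]
            population = rng.random((pop, n_var)) < 0.5      # random bitstrings
            for _ in range(gens):
                scores = np.array([fitness(m, X, y) for m in population])
                parents = population[np.argsort(scores)[: pop // 2]]  # truncation selection
                children = []
                for _ in range(pop - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_var)             # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child ^= rng.random(n_var) < p_mut       # bit-flip mutation
                    children.append(child)
                population = np.vstack([parents, children])
            scores = np.array([fitness(m, X, y) for m in population])
            return population[np.argmin(scores)]

        # synthetic example: only variables 0 and 3 truly matter
        X = rng.normal(size=(200, 8))
        y = 2 * X[:, 0] - 3 * X[:, 3] + rng.normal(scale=0.5, size=200)
        print(ga_select(X, y).astype(int))   # typically recovers variables 0 and 3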

  9. The neural basis of kinesthetic and visual imagery in sports: an ALE meta-analysis.

    Science.gov (United States)

    Filgueiras, Alberto; Quintas Conde, Erick Francisco; Hall, Craig R

    2017-12-19

    Imagery is a widespread technique in the sport sciences that entails the mental rehearsal of a given situation to improve an athlete's learning, performance and motivation. Two modalities of imagery, kinesthetic and visual, are reported to tap into distinct brain structures while sharing common components. This study aimed to investigate the neural basis of those types of imagery using the Activation Likelihood Estimation (ALE) algorithm to perform a meta-analysis. A systematic search was used to retrieve only experimental studies with athletes or sportspersons. Altogether, nine studies were selected and an ALE meta-analysis was performed. Results indicated significant activation of the premotor and somatosensory cortex, supplementary motor areas, inferior and superior parietal lobule, caudate, cingulate and cerebellum in both imagery tasks. It was concluded that visual and kinesthetic imagery share similar neural networks, which suggests that combined interventions are beneficial to athletes, whereas separate use of those two modalities of imagery may be less efficient from a neuropsychological approach.

  10. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results

  11. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  12. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  13. ‘Quadrilateral’ in Philosophy and Bie-modernism (Comments on Aleš Erjavec’s “Zhuyi: From Absence to Bustle? Some Comments on Wang Jianjiang's Article ‘The Bustle and the Absence of Zhuyi’”)

    Directory of Open Access Journals (Sweden)

    Wang Jianjiang

    2017-09-01

    Full Text Available Aleš Erjavec proposed the global philosophical quadrilateral, giving Chinese philosophy, aesthetics, and the humanities an expectation. However, the realization of this expectation hinges on the question whether Chinese philosophy, aesthetics and the humanities can rid themselves of the staggering level of ‘voice’ and develop their ‘speech’. To make ‘speech’, any nation should have its own idea, theory and Zhuyi. How can the embarrassment that the ‘quadrilateral’ expectation implies be overcome? The time-spatialization and four-phase development theories of the Bie-modern, and the great-leap-forward pause theory, have provided an answer. The quadrilateral expectation, as shown by Aleš Erjavec, is encountering the antagonism between ‘cosmopolitanism’ and ‘nationalism’. The key to resolving this antagonism is ‘my’ original achievement consisting of ‘Chinese traditional philosophy, Western philosophy, Marxism and I (myself)’. Bie-modernism is a Zhuyi of self-regulation, self-renewal and self-transcendence and of their practical implementation.

  14. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  15. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  16. Design of an 8:1 switching matrix for X-band microwave signals; Diseño de una matriz de conmutación 8:1 para señales de microondas en banda X.

    OpenAIRE

    DRAGOMIR, ALIN OVIDIU

    2016-01-01

    This work addresses the design, fabrication and experimental validation of an 8-channel switching matrix for the measurement of microwave signals. To achieve a compact implementation with fast switching times, a printed circuit board (PCB) implementation was chosen, using RF switches in Ultra-CMOS technology. The proposed device would have multiple applications, among which it is worth highlighting (given that it is in e...

  17. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. A prototype Manufacturing Grid application system running on a single personal computer is thereby obtained, and experiments can be carried out on this foundation. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost, simple operation, and easily obtained, trustworthy experimental results. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  18. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  19. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computer calculation, and the OSIRIS program complex that realizes it, designed for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either with one computer or simultaneously with two BESM-6 computers, which is provided by a package of interacting programs functioning in each computer. Program interaction in each computer is based on event techniques realized in OS DISPAK. Parallel calculation with two BESM-6 computers allows the computation to be accelerated 1.5 times

  20. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
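
    The LCT-specific derivations are beyond a short sketch, but the LMS building block the paper relies on can be illustrated by the classic system-identification setup below. The signal, filter length and step size are invented for illustration.

        import numpy as np

        def lms(x, d, n_taps=4, mu=0.02):
            """LMS adaptive FIR filter: adapt weights w so that the filtered
            input tracks the desired signal d; returns weights and error."""
            w = np.zeros(n_taps)
            err = np.zeros(len(x))
            for n in range(n_taps - 1, len(x)):
                u = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., newest first
                y = w @ u                           # filter output
                err[n] = d[n] - y                   # estimation error
                w += 2 * mu * err[n] * u            # stochastic-gradient update
            return w, err

        # identify an unknown 4-tap FIR system from its input/output signals
        rng = np.random.default_rng(0)
        x = rng.normal(size=4000)
        h = np.array([0.6, -0.3, 0.15, 0.05])       # the "unknown" system
        d = np.convolve(x, h)[:len(x)]
        w, _ = lms(x, d)
        print(np.round(w, 3))                       # converges towards h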

  1. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended

  2. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes, depending on the current load of each machine in the heterogeneous computing system.
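
    A minimal sketch of the idea, under the assumption that each machine advertises a rated speed and a current load (the data layout is invented here): the workload is split in proportion to effective speed, with integer remainders assigned to the fastest machines.

        def decompose(total_units, machines):
            """Split homogeneous work units among heterogeneous machines in
            proportion to effective speed (rated speed discounted by current
            load); integer remainders go to the fastest machines."""
            eff = {m: s["speed"] * (1.0 - s["load"]) for m, s in machines.items()}
            total_eff = sum(eff.values())
            shares = {m: int(total_units * e / total_eff) for m, e in eff.items()}
            leftover = total_units - sum(shares.values())
            for m in sorted(eff, key=eff.get, reverse=True)[:leftover]:
                shares[m] += 1
            return shares

        machines = {
            "node-a": {"speed": 4.0, "load": 0.10},   # fast, lightly loaded
            "node-b": {"speed": 2.0, "load": 0.50},
            "node-c": {"speed": 1.0, "load": 0.00},
        }
        print(decompose(1000, machines))   # {'node-a': 643, 'node-b': 179, 'node-c': 178}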

  3. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  4. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  5. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  6. Permeability computation on a REV with an immersed finite element method

    International Nuclear Information System (INIS)

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-01-01

    An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.

  7. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography that requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to bring such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D. Outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  8. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  9. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
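
    A toy version of the first stage, assuming a precomputed energy spectrum in place of the RTFI (the thresholds and the harmonic-support test are invented for illustration):

        import numpy as np
        from scipy.signal import find_peaks

        def pitch_candidates(spectrum, freqs, n_harm=4):
            """Peak picking on an energy spectrum followed by a crude
            harmonic-grouping check: keep a candidate f0 only if the bins
            near 2*f0 ... n_harm*f0 also carry energy."""
            peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max())
            accepted = []
            for p in peaks:
                f0 = freqs[p]
                ok = all(spectrum[np.argmin(np.abs(freqs - h * f0))]
                         > 0.05 * spectrum[p] for h in range(2, n_harm + 1))
                if ok:
                    accepted.append(f0)
            return accepted

        # synthetic two-note mixture: fundamentals 220 Hz and 330 Hz
        freqs = np.linspace(50, 2000, 2000)
        spectrum = np.zeros_like(freqs)
        for f0 in (220.0, 330.0):
            for h in range(1, 5):
                spectrum += np.exp(-0.5 * ((freqs - h * f0) / 5.0) ** 2) / h
        print(np.round(pitch_candidates(spectrum, freqs), 1))   # ~ [220. 330.]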

  10. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived, after expounding some points that are easily overlooked, and a computation method is given using the Bessel function of order zero, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while saving computation

  11. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  12. Variation in the Gender Gap in Inactive and Active Life Expectancy by the Definition of Inactivity Among Older Adults.

    Science.gov (United States)

    Malhotra, Rahul; Chan, Angelique; Ajay, Shweta; Ma, Stefan; Saito, Yasuhiko

    2016-10-01

    To assess variation in gender gap (female-male) in inactive life expectancy (IALE) and active life expectancy (ALE) by definition of inactivity. Inactivity, among older Singaporeans, was defined as follows: Scenario 1: health-related difficulty in activities of daily living (ADLs); Scenario 2: health-related difficulty in ADLs/instrumental ADLs (IADLs); Scenario 3: health-related difficulty in ADLs/IADLs or non-health-related non-performance of IADLs. Multistate life tables computed IALE and ALE at age 60, testing three hypotheses: In all scenarios, life expectancy, absolute and relative IALE, and absolute ALE are higher for females (Hypothesis 1 [H1]); gender gap in absolute and relative IALE expands, and in absolute ALE, it contracts in Scenario 2 versus 1 (Hypothesis 2 [H2]); gender gap in absolute and relative IALE decreases, and in absolute ALE, it increases in Scenario 3 versus 2 (Hypothesis 3 [H3]). H1 was supported in Scenarios 1 and 3 but not Scenario 2. Both H2 and H3 were supported. Definition of inactivity influences gender gap in IALE and ALE. © The Author(s) 2016.
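
    The study uses multistate (incidence-based) life tables; the simpler prevalence-based Sullivan method sketched below, with entirely hypothetical numbers, still shows how life expectancy is partitioned into ALE and IALE.

        def sullivan(L, survivors, inactive_prev):
            """Prevalence-based (Sullivan) partition of life expectancy:
            L[i] = person-years lived in age band i, inactive_prev[i] =
            prevalence of inactivity in that band, survivors = l(x0)."""
            iale = sum(Li * pi for Li, pi in zip(L, inactive_prev)) / survivors
            le = sum(L) / survivors
            return le - iale, iale                  # (ALE, IALE)

        # illustrative 5-year age bands from age 60; all numbers hypothetical
        L = [470_000, 440_000, 390_000, 310_000, 200_000, 90_000]
        prev = [0.05, 0.08, 0.14, 0.24, 0.40, 0.60]
        ale, iale = sullivan(L, 95_000, prev)
        print(f"LE60 = {ale + iale:.1f}  ALE = {ale:.1f}  IALE = {iale:.1f}")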

  13. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: identify types and characterize non-spatial and spatial data; demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results; construct testable hypotheses that require inferential statistical analysis; and process spatial data, extract explanatory variables, conduct statisti...

  14. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  15. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  16. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  17. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given
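
    The essence of the replacement can be sketched with SciPy: instead of fixed-point source iteration on phi = K phi + q, the action of (I - K) is handed to GMRES as a LinearOperator. The operator below is a small dense stand-in, not the MUST transport sweep.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        # Source iteration solves phi = K*phi + q by fixed-point sweeps; the
        # Krylov variant instead hands the action of (I - K) to GMRES.
        rng = np.random.default_rng(0)
        n = 200
        K = rng.random((n, n))
        K *= 0.9 / np.abs(np.linalg.eigvals(K)).max()    # set spectral radius to 0.9
        q = rng.random(n)

        A = LinearOperator((n, n), matvec=lambda v: v - K @ v)
        phi, info = gmres(A, q, atol=1e-12)
        assert info == 0                                 # converged

        phi_si = np.zeros(n)                             # plain source iteration
        for _ in range(500):
            phi_si = K @ phi_si + q
        print(np.max(np.abs(phi - phi_si)))              # agreement to solver tolerance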

  18. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
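
    The core of the method fits in a few lines. This 1D sketch in normalized units (grid size, source and boundary treatment chosen arbitrarily, not taken from the book) shows the staggered leapfrog updates of E and H on a Yee grid.

        import numpy as np

        # 1D FDTD (Yee scheme) in normalized units: E and H on staggered grids,
        # leapfrogged in time; a Gaussian pulse is launched from a soft source.
        n_cells, steps = 400, 900
        Ez = np.zeros(n_cells)
        Hy = np.zeros(n_cells - 1)
        S = 1.0                                    # Courant number (<= 1 in 1D)

        for t in range(steps):
            Hy += S * np.diff(Ez)                  # update H from the curl of E
            Ez[1:-1] += S * np.diff(Hy)            # update E from the curl of H
            Ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # soft source
            # Ez[0] and Ez[-1] stay 0: perfectly conducting (reflecting) walls

        print(f"peak |Ez| after {steps} steps: {np.abs(Ez).max():.3f}")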

  19. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...

  20. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-up of computations, especially those related to the problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python
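
    A serial toy version of the QTM itself (no GPU; all parameters invented for illustration): each trajectory of a decaying two-level atom evolves non-Hermitianly between stochastic quantum jumps, and the ensemble average reproduces the exponential decay predicted by the master equation.

        import numpy as np

        rng = np.random.default_rng(0)
        gamma, dt, steps, n_traj = 1.0, 0.001, 3000, 500
        pop = np.zeros(steps)

        for _ in range(n_traj):
            c = np.array([0.0, 1.0], dtype=complex)      # start in excited state |e>
            for t in range(steps):
                pe = abs(c[1]) ** 2
                pop[t] += pe
                if rng.random() < gamma * pe * dt:       # quantum jump: photon emitted
                    c = np.array([1.0, 0.0], dtype=complex)
                else:                                    # smooth non-Hermitian evolution
                    c[1] *= 1.0 - 0.5 * gamma * dt
                    c /= np.linalg.norm(c)

        pop /= n_traj
        print(pop[::1000])    # approx exp(-gamma*t) at t = 0, 1, 2: 1.00, 0.37, 0.14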

  1. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  2. Computational methods for 2D materials: discovery, property characterization, and application design.

    Science.gov (United States)

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move onto the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  3. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
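
    A compact sketch of the DELSA idea, with a toy model and illustrative prior variances: local finite-difference gradients are computed at many points in parameter space and normalized into first-order sensitivity indices per sample (following the spirit of the method, not the authors' code).

        import numpy as np

        def delsa(model, samples, prior_var, h=1e-6):
            """DELSA sketch: at every parameter sample, finite-difference local
            gradients are normalized into first-order sensitivity indices
            weighted by the prior parameter variances."""
            out = []
            for theta in samples:
                y0 = model(theta)
                grad = np.empty(len(theta))
                for j in range(len(theta)):
                    tp = theta.copy()
                    tp[j] += h * max(1.0, abs(theta[j]))
                    grad[j] = (model(tp) - y0) / (tp[j] - theta[j])
                contrib = grad ** 2 * prior_var      # per-parameter variance share
                out.append(contrib / contrib.sum())
            return np.array(out)                     # one row of indices per sample

        # toy model whose sensitivities vary across the parameter space
        model = lambda p: p[0] ** 2 + np.sin(p[1])
        rng = np.random.default_rng(0)
        samples = rng.uniform([0.0, 0.0], [2.0, np.pi], size=(100, 2))
        S = delsa(model, samples, prior_var=np.array([1.0, 1.0]))
        print("mean:", S.mean(axis=0), " min:", S.min(axis=0), " max:", S.max(axis=0))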

  4. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
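
    The sketch below illustrates the same free-energy-minimization idea in Python with scipy; the species set and the dimensionless mu0 values are illustrative placeholders, not the article's program or tabulated data:

      import numpy as np
      from scipy.optimize import minimize

      # Hedged sketch of equilibrium via Gibbs-energy minimization with element
      # balance, as the abstract describes. The mu0 values (dimensionless
      # mu_i0/RT) are invented for illustration.
      species = ["CO", "O2", "CO2"]
      mu0 = np.array([-10.0, 0.0, -20.0])
      A = np.array([[1, 0, 1],              # C atoms per molecule
                    [1, 2, 2]])             # O atoms per molecule
      b = np.array([1.0, 2.0])              # element totals from 1 CO + 0.5 O2

      def gibbs(n):
          n = np.maximum(n, 1e-12)          # keep the logarithms defined
          return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

      res = minimize(gibbs, x0=[0.3, 0.3, 0.3], method="SLSQP",
                     bounds=[(1e-12, None)] * 3,
                     constraints={"type": "eq", "fun": lambda n: A @ n - b})
      print(dict(zip(species, np.round(res.x, 4))))   # mostly CO2, as expected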

  5. Computer-aided method for recognition of proton track in nuclear emulsion

    International Nuclear Information System (INIS)

    Ruan Jinlu; Li Hongyun; Song Jiwen; Zhang Jianfu; Chen Liang; Zhang Zhongbing; Liu Jinliang

    2014-01-01

    In order to overcome the shortcomings of the manual method for proton-recoil track recognition in nuclear emulsions, a computer-aided track recognition method was studied. In this method, image sequences captured by a microscope system are processed through image convolution with composite filters, multi-threshold binarization, clustering of track grains and removal of redundant grains to recognize the track grains in the image sequences. The proton-recoil tracks are then reconstructed from the recognized track grains. Proton-recoil tracks in a nuclear emulsion irradiated by a 14.9 MeV neutron beam were recognized with the computer-aided method. The results show that the proton-recoil tracks reconstructed by this method agree well with those reconstructed manually. This computer-aided track recognition method lays an important technical foundation for the development of an automatic proton-recoil track recognition system and for applications of nuclear emulsions in pulsed neutron spectrum measurement. (authors)
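
    A toy version of such an image-processing chain, assuming only numpy and scipy and a synthetic frame in place of real microscope data, might look like this:

      import numpy as np
      from scipy import ndimage

      # Hedged sketch of the grain-recognition pipeline the abstract outlines:
      # convolution filtering, thresholding, clustering, and pruning of grains.
      rng = np.random.default_rng(2)
      img = rng.poisson(5.0, size=(128, 128)).astype(float)      # background noise
      for xg, yg in [(40, 40), (44, 47), (48, 54), (52, 61)]:    # a fake grain track
          img[xg-1:xg+2, yg-1:yg+2] += 40.0

      smoothed = ndimage.gaussian_filter(img, sigma=1.0)         # composite-filter stand-in
      binary = smoothed > smoothed.mean() + 3 * smoothed.std()   # threshold
      labels, n = ndimage.label(binary)                          # cluster grains
      sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
      keep = [i + 1 for i, s in enumerate(sizes) if s >= 3]      # drop redundant specks
      centroids = ndimage.center_of_mass(binary, labels, keep)
      print(n, "clusters;", len(keep), "kept grains at", np.round(centroids, 1))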

  6. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied to the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. The computational results reveal that damage initiating in the interior of specimens extends to the exterior and causes fracture; the damage evolution is fast relative to the whole tensile process. The EFG method provides a more stable and robust numerical solution in comparison with FEM analysis.
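
    For reference, a small Python function for the GTN yield surface named above, with the usual Tvergaard parameters; the stress values passed in are illustrative:

      import numpy as np

      # Hedged sketch of the GTN yield function: q1, q2, q3 are the usual
      # Tvergaard parameters and f_star the effective void volume fraction.
      def gtn_phi(q_eq, sigma_m, sigma_y, f_star, q1=1.5, q2=1.0, q3=2.25):
          """Yield surface value: < 0 elastic, = 0 yielding."""
          return ((q_eq / sigma_y) ** 2
                  + 2.0 * q1 * f_star * np.cosh(1.5 * q2 * sigma_m / sigma_y)
                  - 1.0 - q3 * f_star ** 2)

      # With no voids (f_star = 0) the surface reduces to von Mises:
      print(gtn_phi(q_eq=250.0, sigma_m=100.0, sigma_y=250.0, f_star=0.0))   # 0.0
      print(gtn_phi(q_eq=250.0, sigma_m=100.0, sigma_y=250.0, f_star=0.05))  # > 0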

  7. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow analytical formulations to be derived with no human intervention. In particular, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, for a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
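
    The cross-product construction that keeps the elements simple can be shown on a planar 2R arm; the sketch below is a generic illustration, not the article's recursive computer-algebra scheme:

      import numpy as np

      # Hedged sketch: geometric Jacobian of a planar 2R arm, built column by
      # column from joint axes and position vectors (the cross-product form).
      def fk(q, l1=1.0, l2=0.8):
          p1 = np.array([l1 * np.cos(q[0]), l1 * np.sin(q[0]), 0.0])
          p2 = p1 + np.array([l2 * np.cos(q[0] + q[1]), l2 * np.sin(q[0] + q[1]), 0.0])
          return p1, p2

      def jacobian(q):
          z = np.array([0.0, 0.0, 1.0])             # both joint axes out of plane
          p1, pe = fk(q)
          cols = [np.cross(z, pe),                  # joint 1 sits at the origin
                  np.cross(z, pe - p1)]             # joint 2 sits at p1
          return np.column_stack(cols)[:2]          # planar: keep x, y rows

      q = np.array([0.3, 0.7])
      dq = np.array([1e-6, 0.0])
      print((fk(q + dq)[1] - fk(q)[1])[:2] / 1e-6)  # numeric first column
      print(jacobian(q)[:, 0])                      # matches the analytic column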

  8. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  9. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method suited to either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation, and that it can be applied to biped systems as well as some simple closed-chain robot systems.

  10. Design and construction of an electroencephalograph prototype for the acquisition of brain signals

    OpenAIRE

    Villalba Calvillo, Guillermo

    2016-01-01

    The aim of this project is the development and implementation of a small electroencephalograph that allows brain signals to be monitored and recorded. The system consists of an analogue stage that conditions the signal, a digital stage built around a microcontroller (DAQ) that digitizes the signal received from the analogue stage, and a graphical interface for displaying that signal, where it can be processed and analyzed...

  11. Communication with a computer through brain signals: application to rehabilitation technology

    OpenAIRE

    Martínez Pérez, Jose Luis

    2011-01-01

    Recent advances in personal-computer hardware and signal processing have made it possible to use EEG signals, or brain waves, for communication between people and computers. Patients suffering from locked-in syndromes now have a new way of communicating with the rest of the world, but even with the most modern techniques these systems still achieve communication rates on the order of 2-3 activities per minute. Moreover, existing devices are not designed...

  13. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics use computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high photon fluxes, such as those of computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times shorter than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images more similar to the experimental ones, which proved that the adaptive method is highly effective for simulations requiring a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  14. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Computational filters aimed at predicting "drug-likeness" in a general sense are then discussed, before methods for the prediction of more specific properties, namely intestinal absorption and blood-brain barrier penetration, are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  15. Fluid Structure Interaction for Hydraulic Problems

    International Nuclear Information System (INIS)

    Souli, Mhamed; Aquelet, Nicolas

    2011-01-01

    Fluid-structure interaction plays an important role in engineering applications. Physical phenomena such as flow-induced vibration in the nuclear industry, fuel sloshing in automotive tanks or rotor-stator interaction in turbomachinery can lead to structural deformation and sometimes to failure. To solve fluid-structure interaction problems, most numerical approaches use two different codes to separately solve the fluid pressure and the structural displacements. In this paper, a single code with an ALE formulation is used to implicitly calculate the pressure of an incompressible fluid applied to the structure. The development of the ALE method, and its coupling into a computational structural dynamics code, makes it possible to solve larger industrial problems involving fluid-structure coupling. (authors)

  16. Consequences of atomic layer etching on wafer scale uniformity in inductively coupled plasmas

    Science.gov (United States)

    Huard, Chad M.; Lanham, Steven J.; Kushner, Mark J.

    2018-04-01

    Atomic layer etching (ALE) typically divides the etching process into two self-limited reactions. One reaction passivates a single layer of material while the second preferentially removes the passivated layer. As such, under ideal conditions the wafer-scale uniformity of ALE should be independent of the uniformity of the reactant fluxes onto the wafers, provided all surface reactions are saturated. The passivation and etch steps should individually asymptotically saturate after a characteristic fluence of reactants has been delivered to each site. In this paper, results from a computational investigation are discussed regarding the uniformity of ALE of Si in Cl2-containing inductively coupled plasmas when the reactant fluxes are both non-uniform and non-ideal. In the parameter space investigated for inductively coupled plasmas, the local etch rate for continuous processing was proportional to the ion flux. When operated with saturated conditions (that is, both ALE steps are allowed to self-terminate), the ALE process is less sensitive to non-uniformities in the incoming ion flux than continuous etching. Operating ALE in a sub-saturation regime resulted in less uniform etching. It was also found that ALE processing with saturated steps requires a larger total ion fluence than continuous etching to achieve the same etch depth. This condition may result in increased resist erosion and/or damage to stopping layers using ALE. While these results demonstrate that ALE provides increased etch depth uniformity, they do not show an improved critical dimension uniformity in all cases. These possible limitations to ALE processing, as well as increased processing time, will be part of the process optimization that includes the benefits of atomic resolution and improved uniformity.
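
    A one-line saturation model reproduces the central argument: once both steps self-limit, the etch per cycle becomes insensitive to flux non-uniformity. The Langmuir-type form and the numbers below are illustrative assumptions, not the paper's plasma model:

      import numpy as np

      # Hedged toy model: per-cycle removal follows 1 - exp(-flux*t/F0), so once
      # both steps saturate, etch per cycle is ~1 monolayer everywhere despite a
      # +/-20% radial flux non-uniformity across the wafer.
      flux = 1.0 + 0.2 * np.cos(np.linspace(0, np.pi, 5))   # flux across the wafer
      F0 = 1.0                                              # characteristic fluence

      def etch_per_cycle(t_step):
          return 1.0 - np.exp(-flux * t_step / F0)          # self-limited form

      for t_step in (0.5, 1.0, 5.0):                        # under- to over-saturated
          epc = etch_per_cycle(t_step)
          print(f"t_step={t_step}: EPC spread = {epc.max() - epc.min():.3f} "
                f"(mean {epc.mean():.3f})")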

  17. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation are explored with numerous examples, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  18. Evaluation of the use of visual and location cues by the broad-tailed hummingbird (Selasphorus platycercus) foraging on flowers of Penstemon roseus

    Directory of Open Access Journals (Sweden)

    Guillermo Pérez

    2012-03-01

    In hummingbirds, spatial memory plays an important role during foraging. Foraging relies on specific cues (visual) or on spatial cues (the location of flowers and plants with nectar). However, the use of these cues by hummingbirds may vary with the spatial scale they face when visiting flowers of one or more plants during foraging; this was tested with individuals of the broad-tailed hummingbird Selasphorus platycercus. To evaluate possible variation in cue use, experiments were carried out under semi-natural conditions using flowers of Penstemon roseus, a plant native to the study site. By manipulating the presence/absence of a reward (nectar) and of visual cues, we evaluated the use of spatial memory during foraging between two plants (experiment 1) and within a single plant (experiment 2). The results showed that the hummingbirds used memory of the location of the plant whose flowers had rewarded them, regardless of the presence of visual cues. In contrast, for individual flowers within a single plant, after a short learning period the hummingbirds were able to use visual cues to guide their foraging and to discriminate against unrewarded flowers. Likewise, in the absence of visual cues, individuals based their foraging on memory of the location of the previously visited rewarding flower. These results suggest plasticity in hummingbird foraging behavior, influenced by spatial scale and by information acquired during previous visits.

  19. Deep perineal endometriosis on an episiotomy scar: report of a rare case

    OpenAIRE

    Laadioui, Meriem; Alaoui, Fdili; Jayi, Sofia; Bouguern, Hakima; Chaara, Hikmat; Melhouf, Moulay Aabdelilah

    2013-01-01

    Among the rare scar locations of endometriosis, the perineum remains exceptional, and the origin is often iatrogenic (episiotomy). We report the case of a patient presenting with cyclic pain at the site of her episiotomy scar. Clinical examination found a mass 3.5 cm in greatest diameter at the episiotomy scar. Perineal ultrasound showed a heterogeneous, non-vascularized hypoechoic image facing the episiotomy scar...

  20. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, big data are extremely large-scale, discrete, and un- or semi-structured, features that go far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, effectively solving the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association-rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
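
    The MapReduce shape of such an algorithm can be sketched in a few lines of Python; the in-process "shuffle" below stands in for the framework's distributed grouping, and the transactions are invented:

      from collections import Counter
      from itertools import combinations

      # Hedged sketch of MapReduce-style association-rule counting: mappers emit
      # (itemset, 1) pairs per transaction, a shuffle groups the keys, and
      # reducers sum the counts. Both phases are simulated in one process here.
      transactions = [{"milk", "bread"}, {"milk", "beer"},
                      {"milk", "bread", "beer"}, {"bread", "beer"}]

      def mapper(tx):                          # emit candidate 2-itemsets
          for pair in combinations(sorted(tx), 2):
              yield pair, 1

      counts = Counter()                       # shuffle + reduce: group and sum
      for tx in transactions:
          for key, value in mapper(tx):
              counts[key] += value

      min_support = 2
      print([(k, v) for k, v in counts.items() if v >= min_support])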

  1. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    Science.gov (United States)

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
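
    The kind of computation being automated can be reproduced with a general-purpose symbolic system; the sketch below (using sympy rather than the authors' quasipolynomial system) expands a weakly non-linear oscillator to first order and exposes the secular term:

      import sympy as sp

      # Hedged sketch of a naive asymptotic expansion: substitute
      # x = x0 + eps*x1 into x'' + x + eps*x**3 = 0 and solve order by order.
      t, eps = sp.symbols("t epsilon")
      x0, x1 = sp.Function("x0"), sp.Function("x1")

      sol0 = sp.dsolve(sp.Derivative(x0(t), t, 2) + x0(t),
                       ics={x0(0): 1, x0(t).diff(t).subs(t, 0): 0})  # x0 = cos(t)

      forcing = -sol0.rhs**3                # right-hand side at order eps
      sol1 = sp.dsolve(sp.Derivative(x1(t), t, 2) + x1(t) - forcing,
                       ics={x1(0): 0, x1(t).diff(t).subs(t, 0): 0})

      # The t*sin(t) secular term that appears is what Lindstedt-type
      # machinery is designed to remove.
      print(sp.simplify(sol0.rhs + eps * sol1.rhs))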

  3. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  4. A-VCI: A flexible method to efficiently compute vibrational spectra

    Science.gov (United States)

    Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2017-06-01

    The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation available to date; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.

  5. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model's low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block-decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread-lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary-wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal-wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of the computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method contributes to the wider adoption of non-hydrostatic models.
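
    A stripped-down Python analogue of the block-decomposition strategy, with a barrier standing in for the producer-consumer synchronization and a 1D explicit diffusion update in place of the model equations:

      import threading
      import numpy as np

      # Hedged sketch of block-decomposition threading: two subdomains joined by
      # a virtual boundary (shared ghost cells), synchronized with a barrier.
      N, steps, alpha = 64, 200, 0.4
      u = np.zeros(N); u[N // 2] = 1.0      # shared array with an initial spike
      barrier = threading.Barrier(2)

      def worker(lo, hi):
          for _ in range(steps):
              barrier.wait()                # everyone sees the same old state
              new = u[lo:hi] + alpha * (u[lo-1:hi-1] - 2*u[lo:hi] + u[lo+1:hi+1])
              barrier.wait()                # all reads finished before writing
              u[lo:hi] = new

      threads = [threading.Thread(target=worker, args=(1, N // 2)),
                 threading.Thread(target=worker, args=(N // 2, N - 1))]
      for th in threads: th.start()
      for th in threads: th.join()
      print("total heat:", round(float(u.sum()), 4), "| peak:", round(float(u.max()), 4))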

  6. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow-up to a previous book on the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  7. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    Science.gov (United States)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
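
    The Lagrange-plus-remap cycle at the heart of ALE methods can be illustrated in one dimension; the first-order conservative remap below is a generic sketch, not the ALE-AMR code's algorithm:

      import numpy as np

      # Hedged 1D sketch of the ALE idea: nodes move with the flow in the
      # Lagrangian step, then a conservative remap puts the mass back on the
      # preferred (here, the original) grid.
      N = 100
      x = np.linspace(0.0, 1.0, N + 1)                           # node positions
      rho = np.where((x[:-1] > 0.2) & (x[:-1] < 0.4), 1.0, 0.1)  # cell densities
      mass = rho * np.diff(x)

      u, dt = 0.25, 0.5
      x_lag = x + u * dt                    # Lagrangian step: nodes advect

      # Conservative remap: interpolate cumulative mass onto the target mesh.
      cum = np.concatenate([[0.0], np.cumsum(mass)])
      cum_t = np.interp(x, x_lag, cum, left=0.0, right=cum[-1])
      rho_new = np.diff(cum_t) / np.diff(x)

      # Interior mass is conserved; any difference is flux out the open ends.
      print("mass before:", np.round(mass.sum(), 4),
            " after:", np.round(np.sum(rho_new * np.diff(x)), 4))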

  8. Modeling NIF Experimental Designs with Adaptive Mesh Refinement and Lagrangian Hydrodynamics

    International Nuclear Information System (INIS)

    Koniges, A E; Anderson, R W; Wang, P; Gunney, B N; Becker, R; Eder, D C; MacGowan, B J

    2005-01-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs

  11. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  12. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computation are reviewed. The equivalent-path-length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad-field, three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil-beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by multiple-scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent-path-length technique, the pencil-beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
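
    The pencil-beam idea reduces, in its simplest form, to convolving the incident fluence with a depth-dependent Gaussian spread; the sigma values below are illustrative, not multiple-scattering (Fermi-Eyges) results for a specific beam:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      # Hedged sketch: broad-field lateral profile as fluence convolved with a
      # Gaussian pencil-beam kernel whose width grows with depth.
      x = np.linspace(-5, 5, 501)                    # lateral position, cm
      dx = x[1] - x[0]
      fluence = ((x > -2) & (x < 2)).astype(float)   # 4 cm wide field

      for depth, sigma in [(0.5, 0.12), (2.0, 0.45), (4.0, 1.1)]:  # sigma in cm
          profile = gaussian_filter1d(fluence, sigma=sigma / dx)
          width80 = np.ptp(x[profile > 0.8 * profile.max()])
          print(f"depth {depth} cm: width of the 80% region = {width80:.2f} cm")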

  13. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
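
    The contour-integral idea can be sketched for a problem whose spectrum is known; the code below implements a Beyn-type probing of T(z)^(-1) on a circle, assuming a linear test problem T(z) = A - zI so the answer is easy to check:

      import numpy as np

      # Hedged sketch of a Beyn-type contour-integral eigensolver: quadrature of
      # T(z)^(-1) V around a circle extracts the eigenvalues enclosed by it.
      n, m, Nq = 12, 6, 64                          # size, probes, quad points
      A = np.diag(np.arange(1.0, 13.0))
      T = lambda z: A - z * np.eye(n)
      V = np.random.default_rng(3).normal(size=(n, m))

      c, r = 5.0, 2.5                               # circle enclosing 3..7
      A0 = np.zeros((n, m), complex); A1 = np.zeros((n, m), complex)
      for th in 2 * np.pi * np.arange(Nq) / Nq:     # trapezoid rule on the circle
          z = c + r * np.exp(1j * th)
          S = np.linalg.solve(T(z), V) * (r * np.exp(1j * th) / Nq)  # has dz/(2*pi*i)
          A0 += S
          A1 += z * S

      V0, sig, W0h = np.linalg.svd(A0, full_matrices=False)
      k = int(np.sum(sig > 1e-8 * sig[0]))          # number of eigenvalues inside
      B = V0[:, :k].conj().T @ A1 @ W0h[:k].conj().T / sig[:k]
      print("found:", np.sort(np.linalg.eigvals(B).real))
      print("exact:", np.diag(A)[(np.diag(A) > c - r) & (np.diag(A) < c + r)])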

  14. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  15. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  16. A method of non-contact code reading based on computer vision

    Science.gov (United States)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, differing from existing physical network-isolation methods. Using computer monitors, a camera and other equipment, the information to be exchanged is processed as follows: image coding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction with calibration, and decoding. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a transfer rate of 24 kb/s is achieved. The experiments show that the algorithm offers high security, high speed and little information loss, meeting the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of exchanging computer information between classified and unclassified networks, with distinctive originality, practicability, and practical research value.
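
    The geometric core of the method, estimating the screen-to-camera homography and rectifying the captured frame, might look as follows (assuming opencv-python; the point correspondences are hard-coded stand-ins for a calibration pattern):

      import cv2
      import numpy as np

      # Hedged sketch: estimate the homography between the displayed calibration
      # image and the camera view, then undistort the captured frame for decoding.
      displayed = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])     # screen corners
      captured = np.float32([[32, 21], [602, 48], [575, 470], [18, 441]])  # as seen by camera

      H, _ = cv2.findHomography(captured, displayed, cv2.RANSAC)

      frame = np.zeros((480, 640, 3), np.uint8)          # stand-in for a camera frame
      rectified = cv2.warpPerspective(frame, H, (640, 480))
      print("homography:\n", np.round(H, 3))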

  17. Acoustic analysis of digital auscultation signals for the detection of heart murmurs

    OpenAIRE

    CASTAÑO, ANDRÉS M.; DELGADO T., EDILSON; GODINO, JUANI; CASTELLANOS, GERMÁN

    2009-01-01

    We present a methodology based on the acoustic analysis of phonocardiographic (FCG) signals for detecting heart murmurs. First, a filtering system based on the wavelet transform is developed to reduce the perturbations that usually arise during the acquisition stage, adjusting the sound quality to clinical requirements validated by specialists in semiology. A segmentation algorithm based on the normalized average energy is proposed...
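
    A compact wavelet-denoising pass of the kind described, assuming the PyWavelets package and a synthetic stand-in for a real phonocardiogram:

      import numpy as np
      import pywt

      # Hedged sketch of wavelet-based filtering of an FCG-like signal:
      # decompose, soft-threshold the detail coefficients, reconstruct.
      fs = 2000
      t = np.arange(0, 1.0, 1.0 / fs)
      s1 = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.2) / 0.02) ** 2)   # fake S1
      s2 = np.sin(2 * np.pi * 90 * t) * np.exp(-((t - 0.6) / 0.015) ** 2)  # fake S2
      noisy = s1 + s2 + 0.2 * np.random.default_rng(4).normal(size=t.size)

      coeffs = pywt.wavedec(noisy, "db6", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
      thr = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db6")[: noisy.size]
      print("residual noise std:", round(float(np.std(denoised - (s1 + s2))), 4))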

  18. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  19. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  20. Investigation of volcanic gas analyses and magma outgassing from Erta' Ale lava lake, Afar, Ethiopia

    Energy Technology Data Exchange (ETDEWEB)

    Gerlach, T.M.

    1980-05-01

    The analyses of 18 volcanic gas samples collected over a two-hour period at 1075 °C from Erta' Ale lava lake in December 1971 and of 18 samples taken over a half-hour period at 1125 to 1135 °C in 1974 display moderately to intensely variable compositions. These variations result from imposed modifications caused by (1) atmospheric contamination and oxidation, (2) condensation and re-evaporation of water during collection, (3) analytical errors, and (4) chemical reactions between the erupted gases and a steel lead-in tube. Detailed examinations of the analyses indicate the erupted gases were at chemical equilibrium before collection. This condition was partially destroyed by the imposed modifications. High-temperature reaction equilibria were more completely preserved in the 1974 samples. Numerical procedures based on thermodynamic calculations have been used to restore each analysis to a composition representative of the erupted gases. These procedures have also been used to restore the anhydrous mean compositions reported for two series of collections taken at the lava lake in January 1973.

  1. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods
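
    The separation-of-variables route can be made concrete in a few lines: substituting psi_m(x) = a_m exp(-x/nu) into the slab-geometry SN equations with isotropic scattering yields a small matrix eigenproblem. This is a generic sketch, not the paper's Green's-function method:

      import numpy as np

      # Hedged sketch: discrete-ordinates relaxation lengths nu from
      # mu_m dpsi/dx + psi = (c/2) * sum_n w_n psi_n with psi_m = a_m exp(-x/nu).
      # That gives (I - (c/2) ones w^T) a = (mu_m / nu) a, so the eigenvalues of
      # diag(1/mu) (I - (c/2) ones w^T) are 1/nu.
      N, c = 4, 0.9                                   # S4 quadrature, scattering ratio
      mu, w = np.polynomial.legendre.leggauss(N)      # Gauss-Legendre nodes/weights

      M = np.diag(1.0 / mu) @ (np.eye(N) - 0.5 * c * np.outer(np.ones(N), w))
      lam = np.linalg.eigvals(M)
      nu = np.sort((1.0 / lam).real)
      print("S4 relaxation lengths nu:", np.round(nu, 5))
      # The largest |nu| approaches the exact asymptotic relaxation length as N grows.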

  3. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    Reliable delamination prediction schemes are indispensable to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and such indicators can hence be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to keep the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; k-means clustering is effectively employed to reduce the size of the datasets. The ANN is also used via inverse modeling to determine the size and location of delaminations from changes in the measured natural frequencies. The results clearly highlight the efficiency and robustness of the approach.

  4. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    Science.gov (United States)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  6. Vectorization on the STAR computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid-flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes, and a comparison is made of the methods for serial computation.

  7. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods.

  8. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computation. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  9. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  10. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.
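
    A minimal flux-balance-analysis calculation, the linear-programming core that many such strain-design tools build on, can be written with scipy; the three-reaction network is invented for illustration:

      import numpy as np
      from scipy.optimize import linprog

      # Hedged sketch of flux balance analysis: maximize a target flux subject
      # to steady-state mass balance S v = 0 and flux bounds.
      #   R1: -> A        R2: A -> B        R3: B -> (product, the objective)
      S = np.array([[ 1, -1,  0],     # species A
                    [ 0,  1, -1]])    # species B
      bounds = [(0, 10), (0, 8), (0, None)]        # uptake limit, enzyme cap, free
      res = linprog(c=[0, 0, -1],                  # linprog minimizes, so -v3
                    A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x)              # expect [8, 8, 8]: R2 limits
      # A gene deletion is simulated by setting that reaction's bounds to (0, 0).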

  11. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has long puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME code also gives an inelastic method of design by analysis, limited to gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes huge differences between the results of the different calculation and analysis methods mentioned above. This is the main reason restricting wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by analyzing the various failure mechanisms of the pressure vessel structure directly, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are depicted compendiously. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing a pressure vessel component by analysis. (authors)
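
    The stress-linearization step these methods revolve around has a compact form: the through-thickness profile is split into membrane, bending and peak parts. The sketch below uses an invented polynomial profile in place of FE results:

      import numpy as np
      from scipy.integrate import trapezoid

      # Hedged sketch of through-thickness stress linearization along a
      # classification line: membrane / bending / peak decomposition.
      t = 0.02                                       # wall thickness, m
      x = np.linspace(0.0, t, 101)                   # through-thickness coordinate
      sigma = 120e6 - 9.0e9 * x + 2.0e11 * x**2      # sampled stress profile, Pa

      sigma_m = trapezoid(sigma, x) / t                           # membrane part
      sigma_b = 6.0 / t**2 * trapezoid(sigma * (t/2 - x), x)      # bending part
      peak = sigma - (sigma_m + sigma_b * (1 - 2 * x / t))        # remainder
      print(f"membrane {sigma_m/1e6:.1f} MPa, bending {sigma_b/1e6:.1f} MPa, "
            f"max peak {np.abs(peak).max()/1e6:.1f} MPa")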

  12. [Autism spectrum disorder and evaluation of perceived stress parents and professionals: Study of the psychometric properties of a French adaptation of the Appraisal of Life Event Scale (ALES-vf)].

    Science.gov (United States)

    Cappe, É; Poirier, N; Boujut, É; Nader-Grosbois, N; Dionne, C; Boulard, A

    2017-08-01

    Autism and related disorders are grouped into the category of "Autism Spectrum Disorder" (ASD) in the DSM-5. This appellation reflects the idea of a dimensional representation of autism that combines symptoms and characteristics that vary in severity and intensity. Despite common characteristics, there are varying degrees in intensity and in the onset of symptoms, ranging from a disability that can be very heavy, with a total lack of communication and major associated disorders, to the existence of a relative autonomy associated, sometimes, with extraordinary intellectual abilities. Parents are faced with several difficult situations, such as sleep disturbances, agitation, shouting, aggression towards others, self-harm, learning difficulties, stereotyping, lack of social and emotional reciprocity, inappropriate behavior, etc. They can feel helpless and may experience stress related to these developmental and behavioral difficulties. The heterogeneity of symptoms, the presence of behavioral problems, and the lack of reciprocity and autonomy also represent a challenge for practitioners in institutions and teachers at school. The objective of this research is to present the validation of a French translation of the Appraisal of Life Events Scale (ALES-vf) from Ferguson, Matthex and Cox, specifically adapted to the context of ASD. ALES was originally developed to operationalize the three dimensions of perceived stress (threat, loss and challenge) described by Lazarus and Folkman. ALES-vf was initially translated into French and adapted to the situation of parents of children with ASD. It was subsequently administered to 343 parents, 150 paramedical professionals involved with people with ASD, and 155 teachers from an ordinary school environment and from specialized schools, welcoming in their classroom at least one child with ASD. An exploratory factor analysis performed on data from 170 parents highlighted two exploratory models with four and three factors, slightly different

  13. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing

  14. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes

  15. Dramaturgia wiersza: „wiersz-płacz”. Płakała w nocy, ale nie jej płacz go zbudził

    Directory of Open Access Journals (Sweden)

    Anna Krajewska

    2016-12-01

    Full Text Available The article is an interpretation of the poem “Płakała w nocy, ale nie jej płacz go zbudził” (“She cried at night, but not her cries woke him”) written by Stanisław Barańczak. The author focuses on the description in this poem, which is dramatic in shape, philosophical, and aligned with the narrative of the crisis of anthropocentrism. She considers the poem to be a metaphysical text defining the relationship between the world of human and non-human reality, and compares it with poems by Jan Kochanowski (Laments), Cyprian Kamil Norwid (In Verona), Andrew Marvell (Eyes and Tears), and Wisława Szymborska (Apple Tree).

  16. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when the trajectories are computed as they extend. This conclusion means that the stable flow with perturbation approaches the real trajectory as it extends over time. Based on this theory, and combined with the improved DHT computing method, this paper reports a new fast computing method for the DHT, which increases the DHT computation speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  17. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when the trajectories are computed as they extend. This conclusion means that the stable flow with perturbation approaches the real trajectory as it extends over time. Based on this theory, and combined with the improved DHT computing method, this paper reports a new fast computing method for the DHT, which increases the DHT computation speed without decreasing its accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  18. Señales de valor de marca de las franquicias en México. Su efecto en el crecimiento del sistema franquiciador

    Directory of Open Access Journals (Sweden)

    Jannett Ayup-González

    2014-04-01

    Full Text Available Brand valuation in franchising is still insufficiently developed. These businesses stimulate the economic development of emerging countries such as Mexico. The purpose of this paper is to analyze the brand-value signals of franchises that drove the growth in outlets from 2002 to 2008. A panel-data methodology was applied to 911 firms operating in the Mexican franchise system. The results reflect an endogeneity effect and negative growth in the sector. The decision to franchise took into account the economic situation and firm size, among other aspects, which confirms the theoretical arguments.

  19. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of the field calculations. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH, a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  20. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  1. A direct Arbitrary-Lagrangian-Eulerian ADER-WENO finite volume scheme on unstructured tetrahedral meshes for conservative and non-conservative hyperbolic systems in 3D

    Science.gov (United States)

    Boscheri, Walter; Dumbser, Michael

    2014-10-01

    In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry already taken into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with

  2. Fluid-Induced Vibration Analysis for Reactor Internals Using Computational FSI Method

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Jong Sung; Yi, Kun Woo; Sung, Ki Kwang; Im, In Young; Choi, Taek Sang [KEPCO E and C, Daejeon (Korea, Republic of)

    2013-10-15

    Reactor coolant flow makes the Reactor Vessel Internals (RVI) vibrate and may affect their structural integrity. U.S. NRC Regulatory Guide 1.20 requires the Comprehensive Vibration Assessment Program (CVAP) to verify the structural integrity of the RVI against Fluid-Induced Vibration (FIV). This paper introduces an FIV analysis method that calculates the response of the RVI to both deterministic and random loads at once and utilizes a more realistic pressure distribution obtained with the computational Fluid-Structure Interaction (FSI) method. The hydraulic forces on the RVI of OPR1000 and APR1400 were computed from hydraulic formulas and from the CVAP measurements in Palo Verde Unit 1 and Yonggwang Unit 4 for the structural vibration analyses. The hydraulic forces were divided into deterministic and random turbulence loads and were used as the excitation forces of the separate structural analyses; these forces were applied to the finite element model, and the responses to them were combined into the resultant stresses. This is a simple and integrative way to obtain the structural dynamic responses of reactor internals to various flow-induced loads. Because the analysis in this paper omitted the bypass flow region and the Inner Barrel Assembly (IBA) due to limited computer resources, it will be necessary to find an effective way to consider all regions in the reactor vessel in future FIV analyses.

  3. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo-type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency, which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies, and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
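
    As background to the grid/particle interaction the abstract refers to, the following is a minimal, illustrative 1D cloud-in-cell (linear-weighting) charge deposition in Python; it is not taken from GTC, and all sizes are hypothetical. The scatter-add (np.add.at) is exactly the operation whose concurrent version raises the memory-traffic and contention issues discussed above.

```python
import numpy as np

def deposit_charge_cic(positions, charge, nx, dx):
    """1D cloud-in-cell charge deposition onto a periodic grid of nx cells."""
    rho = np.zeros(nx)
    cell = np.floor(positions / dx).astype(int) % nx    # index of the left grid point
    frac = positions / dx - np.floor(positions / dx)    # fractional distance to it
    np.add.at(rho, cell, charge * (1.0 - frac) / dx)    # linear weight to left node
    np.add.at(rho, (cell + 1) % nx, charge * frac / dx) # linear weight to right node
    return rho

# 100k particles on a 64-cell periodic grid (hypothetical sizes)
rng = np.random.default_rng(0)
nx, dx = 64, 1.0
pos = rng.uniform(0.0, nx * dx, size=100_000)
rho = deposit_charge_cic(pos, charge=1.0 / 100_000, nx=nx, dx=dx)
```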

  4. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo-type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency, which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies, and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores

  5. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...

  6. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

  7. A systematic and efficient method to compute multi-loop master integrals

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2018-04-01

    Full Text Available We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
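
    The strategy described, setting up ordinary differential equations in a kinematic variable and integrating them numerically from a point where the boundary values are simple, can be illustrated with a toy linear system. The coefficient matrix below is hypothetical and merely stands in for the actual differential equations satisfied by master integrals.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for a differential-equation system of master integrals:
# dI/ds = A(s) I, integrated from s0 where I is known (boundary condition).
def rhs(s, I):
    A = np.array([[0.0, 1.0 / s],
                  [-1.0 / s**2, -2.0 / s]])  # hypothetical coefficient matrix
    return A @ I

I0 = np.array([1.0, 0.0])                    # assumed boundary values at s0 = 1
sol = solve_ivp(rhs, (1.0, 10.0), I0, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])                          # "master integrals" at s = 10
```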

  8. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading, a novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts: eight represented the main histopathological features for tumour grading, and the ninth represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support the tumour grading decision was thus proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology and in augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
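
    For readers unfamiliar with FCM inference, the following is a minimal sketch of one activation update using the common sigmoid rule; the weights and concepts are hypothetical, and the AHL weight-learning step described in the record is omitted.

```python
import numpy as np

def fcm_step(a, W, f=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """One activation update of a fuzzy cognitive map.

    a : current concept activations in [0, 1]
    W : weight matrix, W[j, i] = influence of concept j on concept i
    """
    return f(a + a @ W)  # common FCM rule: own state plus weighted inputs, squashed

# Three feature concepts driving one grade concept (weights hypothetical)
W = np.array([[0.0, 0.0, 0.0, 0.7],
              [0.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0, 0.0]])
a = np.array([0.6, 0.4, 0.9, 0.0])
for _ in range(10):      # iterate until activations stabilise
    a = fcm_step(a, W)
print(a[-1])             # activation of the tumour-grade concept
```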

  9. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    Full Text Available This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density (SED) approach. Based on SED theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single-mode or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable-amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack; to obtain an efficient computation model, the plasticity-induced crack-closure phenomenon is considered during fatigue crack growth. The strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components, and it is convenient for engineering applications since it does not require any additional determination of fatigue parameters (those would need to be separately determined for the fatigue crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years; its influence can be assessed by means of experimental and numerical methods, and both are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool to analyze crack growth and crack-closure effects. Computation results are compared with available experimental results. [Project of the Ministry of Science of the Republic of Serbia, no. OI 174001]

  10. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with the software package Denovo, which solves the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansion. A Monte Carlo simulation was also performed to benchmark the Denovo simulations, and a quantitative comparison was made of the results obtained by the two methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. The discrete ordinates method with a higher-order Legendre polynomial expansion underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). The simulation with quadrature set 8 and a first-order Legendre polynomial expansion proved to be the most efficient computation in the authors' study; its single-thread computation time was 21 min on a personal computer

  11. Efficient method for computing the electronic transport properties of a multiterminal system

    Science.gov (United States)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.

  12. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different spheres of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. The article also considers the choice of colour spaces, which are used as a filter and for the pre-processing of images. Method of AdaB...

  13. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, liquid water, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with the results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical treatment is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975

  14. A finite element method for flow problems in blast loading

    International Nuclear Information System (INIS)

    Forestier, A.; Lepareux, M.

    1984-06-01

    This paper presents a numerical method for fast dynamic problems in flow transient situations such as those found in nuclear plants. A finite element formulation has been chosen; the model is described with a preprocessor of the CASTEM system, the GIBI code. For these typical flow problems, an A.L.E. formulation of the physical equations is used. Several applications are presented: the well-known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation

  15. Pair Programming as a Modern Method of Teaching Computer Science

    Directory of Open Access Journals (Sweden)

    Irena Nančovska Šerbec

    2008-10-01

    Full Text Available At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactical, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills in programming that are crucial for computer science teachers. For all courses, the main emphasis is on the acquisition of professional competences related to the teaching profession and the programming profile; the latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with teaching knowledge and skills. In the paper we present how to achieve programming-related competences by using different didactical models (the semiotic ladder, the taxonomy of cognitive objectives, problem solving) and the modern teaching method of "pair programming". Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method in terms of satisfaction during programming and the level of knowledge gained. The results are generally positive and demonstrate the promise of this teaching method.

  16. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with image sequences of real-world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera; because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method can be introduced that combines the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, though this imports erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and registration is thus accomplished even when the markers are lost in the scene.
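
    A short sketch of the homography-with-RANSAC step, using OpenCV's findHomography; the correspondences below are synthetic, with deliberate outliers standing in for the erroneous matches that a contract-expand stage may produce.

```python
import numpy as np
import cv2

# Putative correspondences; some are assumed to be wrong (outliers).
rng = np.random.default_rng(0)
src_pts = (rng.random((50, 1, 2)) * 640).astype(np.float32)
H_true = np.array([[1.0, 0.02, 5.0], [0.01, 1.0, -3.0], [0.0, 0.0, 1.0]])
dst_pts = cv2.perspectiveTransform(src_pts, H_true.astype(np.float32))
dst_pts[:10] += (rng.random((10, 1, 2)) * 50).astype(np.float32)  # corrupt 10 matches

# RANSAC repeatedly fits H to random 4-point samples and keeps the largest
# consensus set; the reprojection threshold (pixels) rejects bad matches.
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=3.0)
print(H)            # estimated homography
print(mask.sum())   # number of inlier correspondences
```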

  17. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
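
    Since the claimed relation is a straight sum, a direct transcription is trivial; the sketch below uses hypothetical input values.

```python
def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    """Future facility conditions per the patent's stated relation:
    time-period-specific maintenance cost + modernization factor + backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical time-period-specific inputs
print(future_facility_conditions(1.2e6, 3.5e5, 8.0e4))
```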

  18. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss new advances and challenges in these fields, using several methods and techniques to address, more efficiently, different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  19. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  20. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method, driven by the needs of the result, to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix; in return, the subsequent calculation becomes simple and the I/O transmission cost is cut down. We ran experiments on several matrices of different sizes and degrees of sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
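
    For reference, the baseline computation that blocking schemes accelerate is the PageRank power iteration. A dense toy sketch follows; the transition matrix is hypothetical, and this is not the paper's blocked implementation.

```python
import numpy as np

def pagerank_power(P, alpha=0.85, tol=1e-10):
    """Power iteration for PageRank. P[i, j] is the probability of moving from
    page i to page j (rows of dangling pages already patched to be stochastic)."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * (x @ P) + (1.0 - alpha) / n  # follow links or teleport
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

# Tiny 4-page web (row-stochastic transition matrix, hypothetical)
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
print(pagerank_power(P))
```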

  1. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have long been used for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drug design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview of, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction, with a particular (but not exclusive) emphasis on computational tools that can implement these methods, and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  2. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.

    2016-01-06

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have long been used for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drug design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview of, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction, with a particular (but not exclusive) emphasis on computational tools that can implement these methods, and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  3. A high-resolution computational localization method for transcranial magnetic stimulation mapping.

    Science.gov (United States)

    Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro

    2018-05-15

    Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain makes it difficult to determine the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of the patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by this proposal and by direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was thus validated by the well-known position of the "hand-knob" for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere.

  4. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions, and an efficient algorithm for computing the averaging kernels C numerically is presented. The averaging kernels C can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  5. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This third issue introduces continuum simulation methods and their applications: spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters; all subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes; bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already-processed sensitivity indexes. To demonstrate the method's independence of the convergence-testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence-testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an

  7. Interfaz humano-computadora basada en señales de electrooculografía para personas con discapacidad motriz

    Directory of Open Access Journals (Sweden)

    Daniel Pacheco Bautista

    2014-05-01

    Full Text Available This paper presents the development of a prototype that assists people with motor disabilities in interacting with a computer in a simple and economical way, by means of electrooculography signals. This technique detects eye movements based on recording the potential difference that exists between the cornea and the retina; this property is exploited in the project to control the movement of the mouse cursor precisely across the computer screen. The prototype is a compact design powered by a single 5 V source taken from the USB port, and it uses the circuitry already implemented in any conventional electromechanical mouse, with minimal modifications. The use of such devices, as well as of conventional electrodes, makes the product relatively low-cost compared with the proposals of other works.

  8. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm for computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  9. Practical methods to improve the development of computational software

    International Nuclear Information System (INIS)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  10. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    Full Text Available This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet specific needs of distance students, such as students' inability to sit for the scheduled test, conflicting test schedules, and students' desire for the flexibility to retake examinations to improve their grades. In 2004, UT initiated a pilot project for the development of a system and program for the computer-based testing method. In 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be offered by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by the Regional Office staff. The development of the computer-based testing began with tests using computers in a networked configuration; the system has been continually improved and currently uses devices linked to the internet or the World Wide Web. The construction of a test involves the generation and selection of test items from the item bank collection of the UT Examination Center, so that the combination of selected items conforms to the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access by students.

  11. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and obtain the full PageRank vector through forward substitutions, which provides a speedup for calculating the PageRank vector. We observe that, in the existing reordered method, the cost of the recursive reordering procedure can offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure at an appropriate point instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
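
    The reordering idea can be illustrated with a single level of dangling-node reordering in the style of Langville and Meyer: permute dangling pages last, solve only the non-dangling block, and recover the remaining components by forward substitution. This is a simplified, non-recursive sketch, not the authors' adaptive algorithm.

```python
import numpy as np

def pagerank_reordered(H, alpha=0.85):
    """PageRank via one level of dangling-node reordering.

    H is the raw row-normalised hyperlink matrix; rows of dangling pages are zero.
    After permuting dangling pages last, only the non-dangling block needs a solve;
    the dangling components follow by forward substitution.
    """
    n = H.shape[0]
    dangling = H.sum(axis=1) == 0
    order = np.concatenate([np.flatnonzero(~dangling), np.flatnonzero(dangling)])
    Hp = H[np.ix_(order, order)]
    k = int((~dangling).sum())
    H11, H12 = Hp[:k, :k], Hp[:k, k:]
    v = np.full(n, 1.0 / n)                                    # uniform teleport vector
    x1 = np.linalg.solve((np.eye(k) - alpha * H11).T, v[:k])   # reduced system
    x2 = alpha * (x1 @ H12) + v[k:]                            # forward substitution
    x = np.empty(n)
    x[order] = np.concatenate([x1, x2])
    return x / x.sum()                                         # normalise to probabilities

# Tiny hypothetical web: page 3 has no out-links (dangling)
H = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
print(pagerank_reordered(H))
```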

  12. Intravenous catheter training system: computer-based education versus traditional learning methods.

    Science.gov (United States)

    Engum, Scott A; Jeffries, Pamela; Fisher, Lisa

    2003-07-01

    Virtual reality simulators allow trainees to practice techniques without consequences, reduce the potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, their lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience in teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants, 70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training, were recruited. The students ranged in age from 20 to 55 years (mean 25); 58% were female, and 68% perceived themselves as having average computer skills (25% declaring excellence). The methods of IV catheter education compared included a traditional method of instruction involving a scripted self-study module with a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms, and an interactive, multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory groups. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure in the traditional laboratory group compared with the computer catheter simulator group; both groups were similar in their ability to demonstrate the skill correctly. Conclusions: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors

  13. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.

  14. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in the two-dimensional estimation of azimuth and elevation directions of arrival (DOA) with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, in the process of calculating the MUSIC spectrum, the cyclic structure of the steering vector is exploited so that the inner products in the spatial-spectrum calculation are realised by cyclic convolution. The computational cost of the MUSIC spectrum is then obviously lower than that of the conventional method, making this a very practical way to compute the MUSIC spectrum for circular arrays.
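
    For comparison, the conventional (direct) MUSIC pseudospectrum for a uniform circular array can be computed as below; the paper's contribution is to replace the steering-vector inner products in this computation with FFT-based cyclic convolutions. The array geometry and signals here are hypothetical.

```python
import numpy as np

def uca_steering(theta, m, radius_wl):
    """Steering vector of an m-element uniform circular array (radius in wavelengths)
    for an in-plane source at azimuth theta."""
    phi = 2.0 * np.pi * np.arange(m) / m                 # element angular positions
    return np.exp(1j * 2.0 * np.pi * radius_wl * np.cos(theta - phi))

m, radius_wl, snapshots = 8, 0.5, 200
rng = np.random.default_rng(1)
doa = np.deg2rad(40.0)                                   # true azimuth (hypothetical)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(uca_steering(doa, m, radius_wl), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots                           # sample covariance matrix
w, E = np.linalg.eigh(R)                                 # eigenvalues in ascending order
En = E[:, :-1]                                           # noise subspace (1 source assumed)
grid = np.linspace(0.0, 2.0 * np.pi, 720)
A = np.stack([uca_steering(t, m, radius_wl) for t in grid])
P = 1.0 / np.einsum('gi,ij,gj->g', A.conj(), En @ En.conj().T, A).real
print(np.rad2deg(grid[np.argmax(P)]))                    # estimated azimuth
```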

  15. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  16. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  17. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  18. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  19. SEÑALES DE INVERSIÓN BASADAS EN UN ÍNDICE DE AVERSIÓN AL RIESGO

    Directory of Open Access Journals (Sweden)

    Gómez Martínez, Raúl

    2013-09-01

    Full Text Available Internet search statistics are a tool that carries ever more weight in social science research. In this article we propose using internet search statistics, obtained through the Google Insights tool, as an indicator of the state of confidence or risk aversion of investors. With this information we build a risk aversion index (IAR) from the volume of Google searches for certain economic or financial terms that are negatively correlated with market performance. We demonstrate empirically, through an econometric model, that Google search statistics provide relevant information on the evolution of financial markets, and that the IAR provides investment signals with predictive power over the evolution of the main European stock indices, with negative expected returns observed when the IAR rises and positive ones otherwise.

  20. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    Dragt, A.J.; Gluckstern, R.L.

    1992-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  1. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. Some holograms can solve this problem: a cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms, because the spatial resolution of output devices is not high enough; one has to make a large hologram or use a small object to satisfy the sampling theorem. In addition, when calculating such a large fringe pattern, the computation grows in proportion to the hologram size. We therefore propose what we believe to be a new method for fast calculation. We print the computed fringes with our prototype fringe printer and, as a result, obtain a good reconstructed image from a computer-generated cylindrical hologram.

  2. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 /spl mu/g I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.
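
    The least-squares/SVD step of the correction can be sketched as follows; the projection matrix and data here are synthetic stand-ins, and the truncation threshold is an assumed regularization choice rather than the paper's.

    ```python
    import numpy as np

    def svd_least_squares(A, b, rcond=1e-10):
        """Solve min ||A x - b||_2 via the singular value decomposition,
        truncating small singular values for stability (as in the FXCT
        attenuation-correction step described above)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rcond * s[0]          # discard ill-conditioned directions
        inv_s = np.zeros_like(s)
        inv_s[keep] = 1.0 / s[keep]
        return Vt.T @ (inv_s * (U.T @ b))

    # Synthetic stand-in for the discretized measurement operator and data.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 50))       # projection matrix (placeholder)
    x_true = rng.normal(size=50)         # unknown concentration map
    b = A @ x_true + 0.01 * rng.normal(size=200)
    x = svd_least_squares(A, b)
    print(np.linalg.norm(x - x_true))
    ```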

  3. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed, modelling techniques and computational methods for large scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problem with large deformation, arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problem, the structural response spectrum can be found via 'added mass' approach. In a sense, the fluid inertia is accounted by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problem can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of nuclear reactor. In regard to nonlinear dynamic problem, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computation effort by using implicit-explicit mixed time integration method. (orig.)

  4. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of light water reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts
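
    The basic idea of propagating known input uncertainties to output uncertainties can be illustrated by a plain Monte Carlo sketch around a toy response function; the function and distributions below are invented stand-ins for an actual code run, not FRAP's specific technique.

    ```python
    import numpy as np

    def fuel_code(inputs):
        """Placeholder for one run of a fuel-behaviour code: maps input
        variables (e.g., gap width, conductivity, power) to an output of
        interest (e.g., peak cladding temperature). Purely illustrative."""
        gap, cond, power = inputs
        return 600.0 + 80.0 * power / (cond * (1.0 + 5.0 * gap))

    # Known input uncertainties, here modelled as independent normals.
    rng = np.random.default_rng(2)
    n = 10_000
    samples = np.column_stack([
        rng.normal(0.10, 0.01, n),   # gap width
        rng.normal(3.0, 0.2, n),     # conductivity
        rng.normal(1.0, 0.05, n),    # power
    ])
    outputs = np.array([fuel_code(s) for s in samples])
    print(f"output mean = {outputs.mean():.1f}, std = {outputs.std():.1f}")
    ```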

  5. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
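
    A minimal sketch of the interval flavor of set-membership fault detection, for a scalar linear system with bounded noise: propagate an interval that outer-approximates the outputs consistent with the model and flag any measurement that falls outside it. This is a simplified illustration, not the paper's algorithm (which the authors compare against zonotope-based methods).

    ```python
    import numpy as np

    # x[k+1] = a*x[k] + b*u[k] + w[k],  y[k] = x[k] + v[k]
    # with |w| <= w_max, |v| <= v_max (scalar system for simplicity).
    a, b, w_max, v_max = 0.9, 1.0, 0.05, 0.1
    x_lo, x_hi = -0.5, 0.5            # interval containing the initial state

    def step(x_lo, x_hi, u):
        """Interval outer-approximation of the one-step reachable set."""
        lo = min(a * x_lo, a * x_hi) + b * u - w_max
        hi = max(a * x_lo, a * x_hi) + b * u + w_max
        return lo, hi

    u_seq = [1.0, 1.0, 0.5, 0.0]
    y_meas = [1.4, 2.2, 9.0, 2.0]     # third sample is faulty on purpose
    for u, y in zip(u_seq, y_meas):
        x_lo, x_hi = step(x_lo, x_hi, u)
        consistent = (x_lo - v_max) <= y <= (x_hi + v_max)
        print(f"y={y:5.2f}, predicted [{x_lo:5.2f},{x_hi:5.2f}] ->",
              "OK" if consistent else "FAULT")
        if consistent:                 # measurement update: intersect intervals
            x_lo = max(x_lo, y - v_max)
            x_hi = min(x_hi, y + v_max)
    ```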

  6. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  7. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no single computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an artificial neural network (ANN) approach and a computational method based on the Rainwater-Friend theory, were used to predict thermal conductivity over all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low densities, but its efficiency is considerably reduced in the mid-range of density; it cannot be used at density levels higher than 6. The ANN approach, on the other hand, is a reliable method for thermal conductivity prediction over all ranges of density. The best ANN accuracy is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, which gives rise to a nonlinear problem. Analytical approaches are therefore unable to predict thermal conductivity over wide ranges of density; instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  8. Using AMDD method for Database Design in Mobile Cloud Computing Systems

    OpenAIRE

    Silviu Claudiu POPA; Mihai-Constantin AVORNICULUI; Vasile Paul BRESFELEAN

    2013-01-01

    The development of wireless telecommunication technologies gave birth to new kinds of e-commerce, the so-called mobile e-commerce or m-commerce. Mobile Cloud Computing (MCC) represents a new IT research area that combines mobile computing and cloud computing techniques. Behind a cloud mobile commerce system there is a database containing all the information necessary for transactions. By means of the Agile Model Driven Development (AMDD) method, we are able to achieve many benefits that smoo...

  9. A theoretical study of the disc-blade working units of agricultural machines

    Directory of Open Access Journals (Sweden)

    Iurie MELNIC

    2016-12-01

    Full Text Available Conservation technologies such as No-till, Mini-till and Strip-till presuppose the use of agricultural machines with rotary cutting units for operations such as preparing the soil for planting, shredding and incorporating crop residues into the soil, and weed control. The paper presents a theoretical study of the operating process of a disc cultivator for weed control, developed by the author, with rotary cutting units: the dynamic interaction of the rotary blade with the soil, the effect of the friction force between the soil and the disc, the coordinates of a soil particle relative to the mobile coordinate system when rotated by an angle α, and calculation elements for the disc parameters. The main parameter of the developed device is the disc diameter, chosen according to the maximum working depth. The study shows that for a working depth of h = 0.16 m the calculated disc diameter should be Dcalc = 0.55 m. The studied block of rotary blades can also be used in seedling planting machines, seed sowing machines and fertilizer spreading machines.

  10. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After the deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not yet exist. Further, traditional formulations proposed for clearing reactive power markets use non-linear mixed integer programming formulations that are difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of the formulation lies in a pricing scheme that rewards transformers for tap shifting while participating in this market. The proposed model is a challenging non-linear mixed integer program, and a significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve it. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method; these tests demonstrate its computational speed and rigor. (author)

  11. Oligomerization of G protein-coupled receptors: computational methods.

    Science.gov (United States)

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of the mechanisms involved in G protein-coupled receptor (GPCR) functioning, in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role in predicting accurate GPCR complexes. This review outlines computational approaches, focusing on sequence- and structure-based methodologies, and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies but have not always yielded consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints; some disadvantages, such as limited receptor flexibility and non-consideration of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes in fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.

  12. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency than other CGH methods. Its effectiveness is validated by numerical and optical experiments.

  13. Realization of the Evristic Combination Methods by Means of Computer Graphics

    Directory of Open Access Journals (Sweden)

    S. A. Novoselov

    2012-01-01

    Full Text Available The paper looks at ways of enhancing and stimulating the creative activity and initiative of pedagogic students - the prospective specialists called upon to educate socially and professionally competent, original-thinking, versatile personalities. For developing their creative abilities the author recommends introducing heuristic combination methods, as applied in engineering creativity facilitation; associative-synectic technology; and computer graphics tools. The paper contains a comparative analysis of the main heuristic method operations and of the computer graphics editor used in creating a visual composition. Examples of implementing the heuristic combination methods are described, along with extracts from the laboratory classes designed for developing creativity and its motivation. The approbation of the method in several universities confirms its potential for enhancing students' learning and creative activities.

  14. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  15. Thermoelectricity analogy method for computing the periodic heat transfer in external building envelopes

    International Nuclear Information System (INIS)

    Peng Changhai; Wu Zhishen

    2008-01-01

    Simple and effective computation methods are needed to calculate energy efficiency in buildings for building thermal comfort and HVAC system simulations. This paper, which is based upon the theory of thermoelectricity analogy, develops a new harmonic method, the thermoelectricity analogy method (TEAM), to compute the periodic heat transfer in external building envelopes (EBE). It presents, in detail, the principles and specific techniques of TEAM to calculate both the decay rates and time lags of EBE. First, a set of linear equations is established using the theory of thermoelectricity analogy. Second, the temperature of each node is calculated by solving the linear equations set. Finally, decay rates and time lags are found by solving simple mathematical expressions. Comparisons show that this method is highly accurate and efficient. Moreover, relative to the existing harmonic methods, which are based on the classical control theory and the method of separation of variables, TEAM does not require complicated derivation and is amenable to hand computation and programming
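
    For reference, the classical harmonic approach that TEAM is benchmarked against can be sketched for a single homogeneous layer: a complex transfer matrix yields the decay rate and time lag directly. The material values below are assumed, concrete-like numbers, and the decay definition used is one common convention.

    ```python
    import numpy as np

    # Single homogeneous wall layer under a 24-hour sinusoidal excitation.
    # Classical harmonic (transfer-matrix) evaluation of decay rate and time
    # lag -- the quantities TEAM computes via its electric analog.
    P = 24 * 3600.0                             # period [s]
    L, k, rho, c = 0.20, 1.7, 2300.0, 920.0     # layer thickness and properties
    alpha = k / (rho * c)                       # thermal diffusivity [m^2/s]

    s = np.sqrt(1j * 2 * np.pi / (P * alpha))   # complex wavenumber
    A = np.cosh(s * L)                          # transfer-matrix element m11
    # Surface-to-surface temperature decay factor and time lag:
    decay = abs(1.0 / A)
    time_lag_h = -np.angle(1.0 / A) * P / (2 * np.pi) / 3600.0
    print(f"decay rate = {decay:.3f}, time lag = {time_lag_h:.1f} h")
    ```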

  16. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
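
    A brute-force reference implementation of the separable (cas-cas) form of the 2-D DHT is useful for validating fast algorithms such as the moment-based one described; a sketch:

    ```python
    import numpy as np

    def dht2(x):
        """Direct separable (cas-cas) 2-D discrete Hartley transform, with
        cas(t) = cos(t) + sin(t). Brute force; serves as a reference
        against which fast (e.g., moment-based) algorithms can be checked."""
        N, M = x.shape
        n = np.arange(N)
        m = np.arange(M)
        cas = lambda t: np.cos(t) + np.sin(t)
        Cn = cas(2 * np.pi * np.outer(n, n) / N)   # N x N kernel
        Cm = cas(2 * np.pi * np.outer(m, m) / M)   # M x M kernel
        return Cn @ x @ Cm

    x = np.random.default_rng(3).normal(size=(8, 8))
    X = dht2(x)
    # The DHT is its own inverse up to the scale factor N*M:
    print(np.allclose(dht2(X) / (8 * 8), x))
    ```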

  17. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different ways of approach and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...

  18. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In analytical treatments of cascades, the use of virtual components is a very useful technique, while complicated cases require numerical analysis. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not constrained at all by the 'vanishingly small' idea: for the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)

  19. Method for Statically Checking an Object-oriented Computer Program Module

    Science.gov (United States)

    Bierhoff, Kevin M. (Inventor); Aldrich, Jonathan (Inventor)

    2012-01-01

    A method for statically checking an object-oriented computer program module includes the step of identifying objects within a computer program module, at least one of the objects having a plurality of references thereto, possibly from multiple clients. A discipline of permissions is imposed on the objects identified within the computer program module. The permissions enable tracking, from among a discrete set of changeable states, a subset of states each object might be in. A determination is made regarding whether the imposed permissions are violated by a potential reference to any of the identified objects. The results of the determination are output to a user.

  20. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-22

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  1. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  2. A substructure method to compute the 3D fluid-structure interaction during blowdown

    International Nuclear Information System (INIS)

    Guilbaud, D.; Axisa, F.; Gantenbein, F.; Gibert, R.J.

    1983-08-01

    The waves generated by a sudden rupture of a PWR primary pipe have an important mechanical effect on the internal structures of the vessel. This fluid-structure interaction has a strong 3D character. 3D finite element explicit methods can be applied; they take into account the non-linearities of the problem, but the computation is heavy and expensive. We describe in this paper another type of method based on a substructure procedure: the vessel, internals and contained fluid are described axisymmetrically (AQUAMODE computer code), while the pipes and contained fluid are described one-dimensionally (TEDEL-FLUIDE computer code). These substructures are characterized by their natural modes and are then connected to one another (connecting both structural and fluid nodes) with the TRISTANA computer code. This method allows the 3D fluid-structure effects to be computed correctly and cheaply. The treatment of certain non-linearities is difficult because of the modal characterization of the substructures; however, variations of contact conditions versus time can be introduced. We present here some validation tests and comparisons with experimental results from the literature

  3. Computation of mode eigenfunctions in graded-index optical fibers by the propagating beam method

    International Nuclear Information System (INIS)

    Feit, M.D.; Fleck, J.A. Jr.

    1980-01-01

    The propagating beam method utilizes discrete Fourier transforms for generating configuration-space solutions to optical waveguide problems without reference to modes. The propagating beam method can also give a complete description of the field in terms of modes by a Fourier analysis with respect to axial distance of the computed fields. Earlier work dealt with the accurate determination of mode propagation constants and group delays. In this paper the method is extended to the computation of mode eigenfunctions. The method is efficient, allowing generation of a large number of eigenfunctions from a single propagation run. Computations for parabolic-index profiles show excellent agreement between analytic and numerically generated eigenfunctions
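
    The core propagation step of the propagating beam method is a split-step Fourier scheme; the 1-D paraxial sketch below propagates a Gaussian launch field through an assumed parabolic-index profile and records the overlap integral whose Fourier transform over z exhibits peaks at the mode propagation constants. All waveguide parameters are illustrative.

    ```python
    import numpy as np

    # Minimal 1-D split-step Fourier beam propagation (the core of the
    # propagating beam method) for a parabolic-index profile. Mode spectra
    # follow from Fourier-analysing the overlap integral c(z) over z.
    nx, dx = 256, 0.5e-6                      # transverse grid (assumed)
    x = (np.arange(nx) - nx // 2) * dx
    lam, n0 = 1.0e-6, 1.45                    # wavelength, base index
    k0 = 2 * np.pi / lam
    dn = 0.01 * (1 - (x / 20e-6) ** 2)        # parabolic index increment
    dn[np.abs(x) > 20e-6] = 0.0               # cladding: no increment
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    dz = 1e-6
    half_step = np.exp(-1j * kx ** 2 / (2 * n0 * k0) * dz / 2)
    phase = np.exp(1j * k0 * dn * dz)

    field = np.exp(-(x / 5e-6) ** 2)          # launch a Gaussian beam
    f0 = field.copy()
    overlap = []
    for _ in range(2000):
        field = np.fft.ifft(half_step * np.fft.fft(field))
        field *= phase
        field = np.fft.ifft(half_step * np.fft.fft(field))
        overlap.append(np.sum(np.conj(f0) * field) * dx)

    # Peaks of |FFT(overlap)| sit at the mode propagation constants.
    spectrum = np.abs(np.fft.fft(overlap))
    print(spectrum.argmax())
    ```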

  4. Computer-Based Job and Occupational Data Collection Methods: Feasibility Study

    National Research Council Canada - National Science Library

    Mitchell, Judith I

    1998-01-01

    .... The feasibility study was conducted to assess the operational and logistical problems involved with the development, implementation, and evaluation of computer-based job and occupational data collection methods...

  5. Computer-aided methods of determining thyristor thermal transients

    International Nuclear Information System (INIS)

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs
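
    The convolution integral method mentioned above computes the junction temperature rise as the Duhamel convolution of the power loss with the derivative of the transient thermal impedance; a discrete sketch with assumed (illustrative) Foster-network parameters:

    ```python
    import numpy as np

    # Convolution-integral method: the junction temperature rise is the
    # convolution of the power loss P(t) with the derivative of the
    # transient thermal impedance Zth(t). Foster-model parameters below
    # are illustrative, not data for any particular thyristor.
    R = np.array([0.010, 0.020, 0.015])      # thermal resistances [K/W]
    tau = np.array([0.01, 0.1, 1.0])         # time constants [s]

    dt = 1e-3
    t = np.arange(0, 5.0, dt)
    zth = np.sum(R[:, None] * (1 - np.exp(-t[None, :] / tau[:, None])), axis=0)
    dzth = np.gradient(zth, dt)              # dZth/dt

    P = np.where((t > 0.5) & (t < 2.5), 1500.0, 0.0)   # power pulse [W]
    dT = np.convolve(P, dzth)[: len(t)] * dt           # temperature rise [K]
    print(f"peak junction temperature rise = {dT.max():.1f} K")
    ```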

  6. Computational electromagnetic methods for transcranial magnetic stimulation

    Science.gov (United States)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods were developed that improve the analysis, design, and uncertainty quantification of TMS systems. Analysis: A new fast direct technique was developed for solving the large, sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS; it is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was also developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7), and IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel internally combined volume-surface IE method was developed that is stable at high permittivity and low frequency; it not only applies to the analysis of high-permittivity objects but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day figure-8 coils stimulate relatively large regions near the brain surface. An optimization method was developed for designing single-feed TMS coil arrays capable of producing more localized and deeper stimulation. Results show that the coil arrays stimulate 2.4 cm into the head while stimulating 3

  7. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    Science.gov (United States)

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  8. Swords and daggers of the Late Bronze Age: the weapons hoard of Puertollano (Ciudad Real)

    Directory of Open Access Journals (Sweden)

    Montero Ruiz, Ignacio

    2002-12-01

    Full Text Available This paper presents the technological study of a new Late Bronze Age hoard found in Puertollano (Ciudad Real). The hoard is a singular find in the context of the Iberian Peninsula because of the number of items (14 swords and daggers and one ferrule fragment) and because all of them are weapons. Elemental analysis by PIXE shows a copper-tin alloy with a very low impurity pattern. A general comment on the actual use of these weapons is included.

  9. Reliability of Lyapunov characteristic exponents computed by the two-particle method

    Science.gov (United States)

    Mei, Lijie; Huang, Li

    2018-03-01

    For highly complex problems, such as the post-Newtonian formulation of compact binaries, the two-particle method may be a better, or even the only, choice for computing the Lyapunov characteristic exponent (LCE). This method avoids the complex calculation of variational equations required by the variational method. However, the two-particle method sometimes provides spurious estimates of LCEs. In this paper, we first analyze the equivalence of the definition of the LCE between the variational and two-particle methods for Hamiltonian systems. Then, we develop a criterion to determine the reliability of LCEs computed by the two-particle method by considering the magnitude of the initial tangent (or separation) vector ξ0 (or δ0), the renormalization time interval τ, the machine precision ε, and the global truncation error ɛT. The reliable Lyapunov characteristic indicators estimated by the two-particle method form a V-shaped region, restricted by δ0, ε, and ɛT. Finally, numerical experiments with the Hénon-Heiles system, spinning compact binaries, and the post-Newtonian circular restricted three-body problem strongly support the theoretical results.
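
    A minimal sketch of the two-particle method itself, applied to the Hénon-Heiles system discussed in the paper: integrate a reference and a shadow orbit and renormalize their separation to δ0 every τ time units. The initial condition, δ0, and τ below are arbitrary choices that, per the paper, must lie inside the reliable "V-shaped" region.

    ```python
    import numpy as np

    def henon_heiles_rhs(s):
        """Equations of motion for the Hénon-Heiles Hamiltonian."""
        x, y, px, py = s
        return np.array([px, py, -x - 2 * x * y, -y - x * x + y * y])

    def rk4_step(s, h):
        k1 = henon_heiles_rhs(s)
        k2 = henon_heiles_rhs(s + 0.5 * h * k1)
        k3 = henon_heiles_rhs(s + 0.5 * h * k2)
        k4 = henon_heiles_rhs(s + h * k3)
        return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Two-particle method: integrate a reference and a shadow orbit,
    # renormalising their separation to d0 every tau time units.
    d0, tau, h = 1e-8, 1.0, 5e-3            # choices must respect the
    s1 = np.array([0.0, 0.3, 0.35, 0.0])    # 'V-shaped' reliability region
    s2 = s1 + np.array([d0, 0.0, 0.0, 0.0])
    total, n_renorm = 0.0, 500
    for _ in range(n_renorm):
        for _ in range(int(tau / h)):
            s1, s2 = rk4_step(s1, h), rk4_step(s2, h)
        d = np.linalg.norm(s2 - s1)
        total += np.log(d / d0)
        s2 = s1 + (s2 - s1) * (d0 / d)      # renormalise the separation
    print("LCE estimate:", total / (n_renorm * tau))
    ```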

  10. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in the system memory, in the form of a set of situational agents. The model and method of computer-aided instruction formalize non-strict theories from physiology and cognitive psychology. The formal instruction model describes the formation of situations and reactions and their dependence on the parameters affecting learning, such as the reinforcement value and the time between stimulus, action and reinforcement. The change of the contextual link between situational elements with use is formalized. Examples and results are given of computer instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch and light sensors.

  11. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    Science.gov (United States)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft is established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data

  12. Stochastic analysis of induction motor vibration signals for fault detection using empirical mode decomposition

    Directory of Open Access Journals (Sweden)

    Alejandro Rivera Roldán

    2015-04-01

    Full Text Available This article presents an analysis of induction motor vibrations by means of Hidden Markov Models (HMM) applied to features obtained from the Empirical Mode Decomposition (EMD) and the Hilbert-Huang transform of vibration signals acquired along the x and y coordinates, with the aim of detecting faults in bearings and rotor bars. A comparative analysis of how much information the vibration signals in the x and y directions contribute to fault detection is also presented. An ergodic HMM, initialized and trained with the expectation-maximization algorithm, with a convergence tolerance of 10e-7 and a maximum of 100 iterations, was applied to the feature space, and its performance was determined by 80-20 cross-validation with 30 folds, yielding high fault-detection performance in terms of accuracy.
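
    A sketch of such a pipeline using third-party libraries (PyEMD for the decomposition and hmmlearn for the HMM, both assumed dependencies) on a synthetic signal; the windowing and feature choices are placeholders, not the paper's.

    ```python
    import numpy as np
    from PyEMD import EMD                 # pip install EMD-signal (assumed)
    from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn (assumed)

    # Sketch of the abstract's pipeline on a synthetic vibration signal:
    # EMD features per window, then an ergodic Gaussian HMM trained with
    # EM (tol ~ 1e-7, at most 100 iterations, as in the paper).
    rng = np.random.default_rng(4)
    t = np.arange(0, 4.0, 1 / 1000)       # 4 s at 1 kHz
    signal = np.sin(2 * np.pi * 37 * t) + 0.5 * rng.normal(size=t.size)

    win = 200
    features = []
    for i in range(0, signal.size - win, win):
        imfs = EMD().emd(signal[i : i + win])      # intrinsic mode functions
        e = [np.sum(imf ** 2) for imf in imfs[:3]] # energy of first IMFs
        features.append(e + [0.0] * (3 - len(e)))  # pad if fewer IMFs
    X = np.asarray(features)

    model = GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, tol=1e-7)
    model.fit(X)
    print("log-likelihood:", model.score(X))
    ```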

  13. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  15. Computer-Aided Design Method of Warp-Knitted Jacquard Spacer Fabrics

    Directory of Open Access Journals (Sweden)

    Li Xinxin

    2016-06-01

    Full Text Available Based on a further study of knitting and jacquard principles, this paper presents a mathematical design model to make the computer-aided design of warp-knitted jacquard spacer fabrics more efficient. The mathematical model, formulated with a matrix method, employs three essential elements: chain notation, threading, and jacquard design. The process of designing warp-knitted jacquard spacer fabrics with CAD software based on this model is also introduced. In this study, sports shoes with separate functional areas, defined according to foot structure and movement characteristics, are analysed. The results show different patterns on jacquard spacer fabrics that are seamlessly stitched using jacquard techniques. The proposed computer-aided design method for warp-knitted jacquard spacer fabrics is efficient and simple.

  16. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    Science.gov (United States)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes; for this task, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
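
    A toy version of the scheme: each node owns one row of the covariance matrix, local products form the power iteration, and the only global quantity (the norm of the iterate) is obtained by average consensus over a ring graph. The graph, weights, and sizes below are assumptions for illustration.

    ```python
    import numpy as np

    def average_consensus(values, W, iters=200):
        """Distributed averaging: each node repeatedly mixes with its
        neighbours using a doubly stochastic weight matrix W."""
        v = values.copy()
        for _ in range(iters):
            v = W @ v
        return v          # every entry converges to the global average

    # n nodes on a ring; simple doubly stochastic mixing weights.
    n = 8
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

    # Each node i owns row i of the sample covariance matrix R.
    rng = np.random.default_rng(5)
    A = rng.normal(size=(100, n))
    R = A.T @ A / 100

    # Power method: local products, global norm via consensus.
    x = np.ones(n) / np.sqrt(n)
    for _ in range(100):
        y = np.array([R[i] @ x for i in range(n)])   # local computations
        sq = average_consensus(y ** 2, W)            # global task via consensus
        norm = np.sqrt(n * sq[0])                    # every node knows ||y||
        x = y / norm
    print("largest eigenvalue estimate:", x @ (R @ x))
    print("numpy reference:", np.linalg.eigvalsh(R)[-1])
    ```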

  17. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref.

  18. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention because of its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely, but they suffer from traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method is proposed in this paper, whose key idea is to divide the detection process into multiple stages. During each stage, only a small region of the network is detected using a small set of probes, while it is ensured that the entire network is covered after multiple detection stages. The method guarantees that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method

  19. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for the assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  20. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    Science.gov (United States)

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs), and several methods have been developed to assess the computer work risk factors related to MSDs. This review aims to give an overview of the pen-and-paper-based observational techniques currently available for assessing the ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015, focusing on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders, and assessing the risk factors covered, reliability and validity of each method. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools are posture, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were shown to be reliable, with ratings of moderate to good. Only four of the seven methods had been tested for validity, with moderate results. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and computer work. Furthermore, although proper validation of exposure assessment techniques is the most important factor in developing a tool, several of the existing observational methods have not been tested for reliability and validity.

  1. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable to analyzing cylindrical and partially cylindrical objects inspected using computed tomography. The method involves unwrapping and re-slicing data so that the CT data from a cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions and can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. This software differentiates itself from other possible re-slicing solutions through its complete automation and advanced processing and analysis capabilities.
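
    The unwrap-and-re-slice idea can be sketched for a single axial slice with a polar resampling; stacking the results over slices gives the vertical 2-D sheets described. This is a hypothetical reconstruction of the idea, not NASA's code.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def unwrap_slice(slice_2d, center, r_min, r_max, n_theta=720, n_r=64):
        """Resample one axial CT slice of a cylindrical part onto a
        (theta, radius) grid. Stacking unwrapped slices over the axial
        direction yields the 2-D 'sheets' described in the abstract."""
        cy, cx = center
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        r = np.linspace(r_min, r_max, n_r)
        T, Rr = np.meshgrid(theta, r, indexing="ij")
        rows = cy + Rr * np.sin(T)
        cols = cx + Rr * np.cos(T)
        return map_coordinates(slice_2d, [rows, cols], order=1)

    # Synthetic slice: an annulus with a small 'flaw'.
    yy, xx = np.mgrid[0:256, 0:256]
    rad = np.hypot(yy - 128, xx - 128)
    img = ((rad > 80) & (rad < 100)).astype(float)
    img[40:44, 124:132] = 0.0                      # notch in the wall
    sheet = unwrap_slice(img, (128, 128), 80, 100)
    print(sheet.shape)                             # (720, 64): theta x radius
    ```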

  2. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite elements problems. It was therefore very difficult to carry out some parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repairing processes of the closure studs of PWR. It is well known that such repairing generally involves several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method, which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part, and a local part. The local problem is solved by F.E.M. on the precise geometry of the thread of some elementary loadings. The global one is formulated on the gudgeon scale and is reduced to a monodimensional one. The resolution of this global problem leads to the unsignificant computational cost. Then, a post-processing gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scales approach, the method is described. The validation by comparison with a direct F.E. computation and some further applications are presented

  3. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support.

    Science.gov (United States)

    van den Berg, Yvonne H M; Gommans, Rob

    2017-09-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.

  4. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  5. Establecimiento de una base de datos de señales de vibraciones acústicas e imágenes termográficas infrarrojas para un sistema mecánico rotativo con la combinación de diferentes tipos de fallos y elaboración de guías de prácticas para detección de fallos en engranajes

    OpenAIRE

    Guiracocha Guiracocha, Rómulo Andrés

    2015-01-01

    El proyecto genera bases de datos de señales de emisión acústica, señales de vibración mecánicas e imágenes termográficas sobre un sistema mecánico rotativo que servirán en el diagnóstico de fallos aplicado al monitoreo de la condición y se generan guías de práctica para el análisis de vibraciones mecánicas en caja de engranajes y evaluación térmica de rodamientos. This project generates databases of acoustic emission signals, mechanical vibration signals and thermal images on a rotating m...

  6. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
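
    A simplified sketch of the edge-collection step using an unweighted Delaunay triangulation from SciPy (the paper uses the weighted Delaunay triangulation and its dual complex, so this is only an approximation of the first and third steps):

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_contacts(centers, radii, probe=1.4):
        """Simplified contact detection: two atoms are in contact when their
        centers share an edge in the Delaunay triangulation and the gap
        between their spheres is smaller than a probe diameter. (The paper
        uses the *weighted* Delaunay / dual complex; this unweighted
        version is only an approximation.)"""
        tri = Delaunay(centers)
        edges = set()
        for simplex in tri.simplices:          # 3-D simplices: 4 vertices
            for i in range(4):
                for j in range(i + 1, 4):
                    a, b = sorted((simplex[i], simplex[j]))
                    edges.add((a, b))
        contacts = []
        for a, b in sorted(edges):
            gap = np.linalg.norm(centers[a] - centers[b]) - radii[a] - radii[b]
            if gap < 2 * probe:
                contacts.append((a, b))
        return contacts

    rng = np.random.default_rng(6)
    centers = rng.uniform(0, 10, size=(30, 3))
    radii = np.full(30, 1.7)                   # carbon-like radii [angstrom]
    print(len(delaunay_contacts(centers, radii)), "contacts")
    ```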

  7. Laboratory Sequence in Computational Methods for Introductory Chemistry

    Science.gov (United States)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  8. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Having a form of the integrated depth, it shares many desirable features. Furthermore, the novel formation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as
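
    To make the low-rank idea concrete, the sketch below applies a generic Nystrom-type (subset-of-regressors) approximation to one-dimensional kriging, reducing the dominant cost from O(n^3) to O(n m^2) for m inducing points. This generic scheme and the synthetic data are assumptions for illustration; the thesis's hierarchical low-rank construction is not reproduced.

```python
# Low-rank kriging sketch: approximate a Gaussian process with m inducing
# points (subset-of-regressors / Nystrom flavor). Not the hierarchical
# scheme of the thesis; data and hyperparameters are illustrative.
import numpy as np

def k(a, b, ell=0.3):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 2000))              # observation locations
y = np.sin(6 * x) + 0.1 * rng.standard_normal(x.size)
u = np.linspace(0, 1, 30)                         # m = 30 inducing points
xs = np.linspace(0, 1, 5)                         # prediction locations

noise = 0.1 ** 2
Kuu = k(u, u) + 1e-8 * np.eye(u.size)
Kux = k(u, x)

# Subset-of-regressors mean: K_su (noise*Kuu + Kux Kxu)^-1 Kux y
A = noise * Kuu + Kux @ Kux.T
w = np.linalg.solve(A, Kux @ y)
pred = k(xs, u) @ w
print(np.round(pred, 3))
```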

  9. Mathematical modelling in volume per hectare of Pinus caribaea Morelet var. caribaea Barret y Golfari at the «Jazmines» silvicultural unit, Viñales

    Directory of Open Access Journals (Sweden)

    Juana Teresa Suárez Sarria

    2013-12-01

    Full Text Available Mathematical modelling constitutes a very useful tool for the planning and administration of forest ecosystems. With the objective of predicting the behavior of volume per hectare of Pinus caribaea Morelet var. caribaea Barret y Golfari plantations at the «Jazmines» Silvicultural Unit, Viñales, seven nonlinear regression models were evaluated. The model with the best goodness of fit for volume per hectare was Hossfeld I, with a coefficient of determination of 63.9% and a highly significant parameter (P < 0.001). Description curves of the mean annual increment over time (IMA) and the current annual increment (ICA) of this variable were also provided.
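
    As an illustration of fitting such a nonlinear volume model, the sketch below fits a Hossfeld-type curve with scipy.optimize.curve_fit and derives the mean and current annual increments. The functional form V(t) = t^2/(a + b t + c t^2) is one common statement of Hossfeld I and is an assumption here, as are the synthetic data; the plantation measurements themselves are not available in this record.

```python
# Fitting a Hossfeld-type growth curve for volume per hectare; the form and
# the synthetic data are assumptions, not the Vinales plantation records.
import numpy as np
from scipy.optimize import curve_fit

def hossfeld(t, a, b, c):
    return t ** 2 / (a + b * t + c * t ** 2)

rng = np.random.default_rng(2)
age = np.linspace(5, 40, 25)                         # stand age, years
vol = hossfeld(age, 8.0, 0.9, 0.004) + rng.normal(0, 3, age.size)

params, _ = curve_fit(hossfeld, age, vol, p0=(1.0, 1.0, 0.01))
resid = vol - hossfeld(age, *params)
r2 = 1 - resid @ resid / ((vol - vol.mean()) @ (vol - vol.mean()))
print("a, b, c =", np.round(params, 4), " R^2 =", round(r2, 3))

# Mean annual increment (IMA) and current annual increment (ICA):
ima = hossfeld(age, *params) / age
ica = np.gradient(hossfeld(age, *params), age)
print(np.round(ima, 2), np.round(ica, 2), sep="\n")
```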

  10. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs, by considerably reducing the area (thus increasing the level of parallelism, while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
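
    The flavor of arctangent-free orientation binning can be shown with a simple comparison scheme: with bin edges at angles theta_k, a gradient (gx, gy) folded into the upper half-plane satisfies angle >= theta_k exactly when gy*cos(theta_k) - gx*sin(theta_k) >= 0, so the bin index is just a count of passed edges. This is a generic sketch of the idea, not the paper's exact allocation rule or its interpolation-free magnitude scheme.

```python
# Orientation binning without arctan: fold the gradient into the upper
# half-plane and count how many precomputed bin-edge directions it has
# passed. A generic sketch of comparison-based binning, not the paper's
# exact FPGA-oriented algorithm.
import numpy as np

def hog_orientation_bins(gx, gy, n_bins=9):
    """Unsigned orientation bin per pixel, no arctan evaluations."""
    # Fold gradients so the angle lies in [0, 180).
    flip = (gy < 0) | ((gy == 0) & (gx < 0))
    gx = np.where(flip, -gx, gx)
    gy = np.where(flip, -gy, gy)
    bins = np.zeros(gx.shape, dtype=np.int32)
    for kk in range(1, n_bins):              # edges at kk*180/n_bins degrees
        theta = np.deg2rad(kk * 180.0 / n_bins)
        # angle >= theta  <=>  gy*cos(theta) - gx*sin(theta) >= 0
        bins += (gy * np.cos(theta) - gx * np.sin(theta) >= 0)
    return bins

img = np.random.default_rng(3).random((32, 32))
gx = np.gradient(img, axis=1)
gy = np.gradient(img, axis=0)
hist = np.bincount(hog_orientation_bins(gx, gy).ravel(),
                   weights=np.hypot(gx, gy).ravel(), minlength=9)
print(np.round(hist, 2))
```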

  11. Computations of finite temperature QCD with the pseudofermion method

    International Nuclear Information System (INIS)

    Fucito, F.; Solomon, S.

    1985-01-01

    The authors discuss the phase diagram of finite temperature QCD as obtained when the effects of dynamical quarks are included via the pseudofermion method. They compare their results with those obtained by other groups and comment on the current state of the art for this kind of computation

  12. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  13. Low frequency signals and the strategy of customer retention

    Directory of Open Access Journals (Sweden)

    Jessé Alves Amâncio

    2010-03-01

    Full Text Available Research results are reported about monitoring various internal company variables, such as Customer Service and Quality Control, as related to the strategy for retaining customers. The purpose was to evaluate the ability to identify situations where customers leave during turbulent business environments, according to the concepts of low and high frequency signals proposed by Ansoff (1993). These signals show, more or less explicitly, situations that require company action to retain customers. Inherently, companies are not prepared to detect the low frequency signals, which are the more important ones during uncertainties. The case study method investigated a Brazilian freight company with quantitative and qualitative methods. Low frequency signals were not suitably detected, while high frequency signals were not sufficient to identify critical customer retention situations. The conclusion was that in turbulent times, greater effort must be applied to monitor low frequency signals in order to implement an effective customer retention strategy.

  14. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  15. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
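
    A minimal example of the kind of computation meant here is a Monte Carlo estimate of a Quantity of Interest under an SDE model, using the Euler-Maruyama scheme on geometric Brownian motion to price a European call. All parameter values are illustrative; the dissertation's specific methods are not reproduced.

```python
# Euler-Maruyama Monte Carlo for a European call under geometric Brownian
# motion: dS = r S dt + sigma S dW. Parameters are illustrative.
import numpy as np

s0, r, sigma, T, K = 100.0, 0.03, 0.2, 1.0, 105.0
n_steps, n_paths = 250, 100_000
dt = T / n_steps

rng = np.random.default_rng(4)
s = np.full(n_paths, s0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    s += r * s * dt + sigma * s * dw

payoff = np.exp(-r * T) * np.maximum(s - K, 0.0)
price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"call price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```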

  16. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  17. Structural dynamics in LMFBR containment analysis: a brief survey of computational methods and codes

    International Nuclear Information System (INIS)

    Chang, Y.W.; Gvildys, J.

    1977-01-01

    In recent years, the use of computer codes to study the response of primary containment of large, liquid-metal fast breeder reactors (LMFBR) under postulated accident conditions has been adopted by most fast reactor projects. Since the first introduction of the REXCO-H containment code in 1969, a number of containment codes have evolved and been reported in the literature. The paper briefly summarizes the various numerical methods commonly used in containment analysis computer programs. They are compared on the basis of the truncation errors resulting from the numerical approximation, the method of integration, the resolution of the computed results, and the ease of programming in computer codes. The aim of the paper is to provide enough information for an analyst to suitably define his choice of method, and hence his choice of programs

  18. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  19. Atomic layer deposition and etching methods for far ultraviolet aluminum mirrors

    Science.gov (United States)

    Hennessy, John; Moore, Christopher S.; Balasubramanian, Kunjithapatham; Jewell, April D.; Carter, Christian; France, Kevin; Nikzad, Shouleh

    2017-09-01

    High-performance aluminum mirrors at far ultraviolet wavelengths require transparent dielectric materials as protective coatings to prevent oxidation. Reducing the thickness of this protective layer can yield additional performance gains by minimizing absorption losses, and provides a path toward high Al reflectance in the challenging wavelength range of 90 to 110 nm. We have pursued the development of new atomic layer deposition (ALD) processes for the metal fluorides MgF2, AlF3 and LiF. Using anhydrous hydrogen fluoride as a reactant, these films can be deposited at the low temperatures required for large-area surface-finished optics and polymeric diffraction gratings. We also report on the development and application of an atomic layer etching (ALE) procedure to controllably etch native aluminum oxide. Our ALE process utilizes the same chemistry used in the ALD of AlF3 thin films, allowing for a combination of high-performance evaporated Al layers and ultrathin ALD encapsulation without requiring vacuum transfer. Progress in demonstrating the scalability of this approach, as well as the environmental stability of ALD/ALE Al mirrors, is discussed in the context of possible future applications for the NASA LUVOIR and HabEx mission concepts.

  20. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicholson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
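
    The iteration-by-subdomain idea can be shown on a toy problem: overlapping alternating Schwarz for a 1-D Poisson equation, where each subdomain is solved with Dirichlet data taken from the other's latest iterate. This is a deliberately simplified sketch; the report's Adaptive Dirichlet/Neumann coupling and two-phase flow physics are far beyond it.

```python
# Overlapping alternating Schwarz for -u'' = 1 on (0, 1), u(0) = u(1) = 0.
# A toy illustration of iteration-by-subdomain; the exact solution is
# u(x) = x(1 - x)/2, which the second-order scheme reproduces exactly.
import numpy as np

n = 101
h = 1.0 / (n - 1)
u = np.zeros(n)

def solve_dirichlet(ul, ur, m):
    """Solve -u'' = 1 on m interior nodes with Dirichlet end values."""
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    b = np.ones(m)
    b[0] += ul / h ** 2
    b[-1] += ur / h ** 2
    return np.linalg.solve(A, b)

lo, hi = 60, 40                    # subdomains [0, 60] and [40, 100] overlap
for _ in range(30):                # exchange interface values each sweep
    u[1:lo] = solve_dirichlet(u[0], u[lo], lo - 1)
    u[hi + 1:n - 1] = solve_dirichlet(u[hi], u[n - 1], n - hi - 2)

x = np.linspace(0, 1, n)
print("max error:", np.abs(u - 0.5 * x * (1 - x)).max())
```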

  1. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    Science.gov (United States)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the joint work of specialists in medicine and in applied mathematics and computer science on elaborating new algorithms and methods for medical 2D and 3D imaging.
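
    As a generic stand-in for such a contour-identification step, the sketch below extracts iso-level contours from a synthetic slice with marching squares (skimage.measure.find_contours); the authors' own algorithms are not described in enough detail in this record to reproduce.

```python
# Generic contour identification on a 2-D image via marching squares.
# The test image is synthetic, not actual CT data.
import numpy as np
from skimage import measure

# Synthetic "CT slice": a bright disk on a dark background plus noise.
y, x = np.mgrid[0:128, 0:128]
img = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
img += 0.05 * np.random.default_rng(5).standard_normal(img.shape)

contours = measure.find_contours(img, level=0.5)
longest = max(contours, key=len)
print(f"{len(contours)} contours; longest has {len(longest)} points")
```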

  2. Magnetic field computations of the magnetic circuits with permanent magnets by infinite element method

    International Nuclear Information System (INIS)

    Hahn, Song Yop

    1985-01-01

    A method employing infinite elements is described for the magnetic field computations of magnetic circuits with permanent magnets. The system stiffness matrix is derived by a variational approach, while the interfacial boundary conditions between the finite element regions and the infinite element regions are handled using a collocation method. The proposed method is applied to simple linear problems, and the numerical results are compared with those of the standard finite element method and with analytic solutions. It is observed that the proposed method gives more accurate results than the standard finite element method for the same computing effort. (Author)

  3. Advanced display object selection methods for enhancing user-computer productivity

    Science.gov (United States)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUI's allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.

  4. Computer-aided head film analysis: the University of California San Francisco method.

    Science.gov (United States)

    Baumrind, S; Miller, D M

    1980-07-01

    Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.

  5. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    Science.gov (United States)

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
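
    A worked example of the bootstrap idea described above, computing a percentile confidence interval for a sample median by resampling with replacement (the data are illustrative):

```python
# Bootstrap percentile confidence interval for a scalar statistic.
import numpy as np

rng = np.random.default_rng(6)
sample = rng.exponential(scale=2.0, size=80)     # observed data (synthetic)

n_boot = 10_000
medians = np.array([np.median(rng.choice(sample, sample.size, replace=True))
                    for _ in range(n_boot)])
lo, hi = np.percentile(medians, [2.5, 97.5])
print(f"median = {np.median(sample):.3f}, "
      f"95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```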

  6. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • The effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada, are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), the empirical method of Justus (EMJ), the empirical method of Lysen (EML), the energy pattern factor method (EPF), the maximum likelihood method (ML) and the modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed across the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. Four methods, EMJ, EML, EPF and ML, show very favorable efficiency, while the GP method shows weak ability for all stations. However, the more effective method is not the same across stations, owing to differences in the wind characteristics.
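
    The maximum likelihood route can be sketched directly: solve the standard ML fixed-point equation for the shape parameter k, recover the scale c, and evaluate the mean wind power density P = 0.5*rho*c^3*Gamma(1 + 3/k). The wind speeds below are synthetic, not the Alberta station records, and the formulas follow common practice rather than the paper's exact procedure.

```python
# Weibull shape/scale by maximum likelihood, then wind power density.
# Synthetic wind speeds; formulas follow standard practice.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(7)
v = rng.weibull(2.0, 5000) * 7.0          # synthetic wind speeds (m/s)
v = v[v > 0]

k = 2.0                                    # ML fixed-point iteration for k
for _ in range(100):
    vk = v ** k
    k = 1.0 / (np.sum(vk * np.log(v)) / np.sum(vk) - np.mean(np.log(v)))
c = np.mean(v ** k) ** (1.0 / k)           # scale parameter (m/s)

rho = 1.225                                # air density, kg/m^3
power_density = 0.5 * rho * c ** 3 * gamma(1 + 3.0 / k)
print(f"k = {k:.3f}, c = {c:.3f} m/s, P = {power_density:.1f} W/m^2")
```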

  7. IV international conference on computational methods in marine engineering : selected papers

    CERN Document Server

    Oñate, Eugenio; García-Espinosa, Julio; Kvamsdal, Trond; Bergan, Pål; MARINE 2011

    2013-01-01

    This book contains selected papers from the Fourth International Conference on Computational Methods in Marine Engineering, held at Instituto Superior Técnico, Technical University of Lisbon, Portugal in September 2011.  Nowadays, computational methods are an essential tool of engineering, which includes a major field of interest in marine applications, such as the maritime and offshore industries and engineering challenges related to the marine environment and renewable energies. The 2011 Conference included 8 invited plenary lectures and 86 presentations distributed through 10 thematic sessions that covered many of the most relevant topics of marine engineering today. This book contains 16 selected papers from the Conference that cover “CFD for Offshore Applications”, “Fluid-Structure Interaction”, “Isogeometric Methods for Marine Engineering”, “Marine/Offshore Renewable Energy”, “Maneuvering and Seakeeping”, “Propulsion and Cavitation” and “Ship Hydrodynamics”.  The papers we...

  8. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  9. Distributed-Lagrange-Multiplier-based computational method for particulate flow with collisions

    Science.gov (United States)

    Ardekani, Arezoo; Rangel, Roger

    2006-11-01

    A Distributed-Lagrange-Multiplier-based computational method is developed for colliding particles in a solid-fluid system. A numerical simulation is conducted in two dimensions using the finite volume method. The entire domain is treated as a fluid but the fluid in the particle domains satisfies a rigidity constraint. We present an efficient method for predicting the collision between particles. In earlier methods, a repulsive force was applied to the particles when their distance was less than a critical value. In this method, an impulsive force is computed. During the frictionless collision process between two particles, linear momentum is conserved while the tangential forces are zero. Thus, instead of satisfying a condition of rigid body motion for each particle separately, as done when particles are not in contact, both particles are rigidified together along their line of centers. Particles separate from each other when the impulsive force is less than zero and after this time, a rigidity constraint is satisfied for each particle separately. Grid independency is implemented to ensure the accuracy of the numerical simulation. A comparison between this method and previous collision strategies is presented and discussed.
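
    The core of an impulse-based frictionless collision can be written down compactly: exchange momentum only along the line of centers, leaving tangential velocities untouched. The toy sketch below uses an elastic impulse J = 2*mu*v_rel with reduced mass mu; it illustrates the conservation argument only, not the paper's coupled fluid-particle formulation, and all values are illustrative.

```python
# Impulse-based frictionless collision between two rigid disks: linear
# momentum is conserved and only the normal velocity component changes.
import numpy as np

m1, m2 = 1.0, 2.0
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([1.0, 0.2]), np.array([-0.5, 0.0])

n = (x2 - x1) / np.linalg.norm(x2 - x1)     # line of centers
v_rel = (v1 - v2) @ n                       # closing speed along n
if v_rel > 0:                               # apply impulse only if approaching
    mu = m1 * m2 / (m1 + m2)                # reduced mass
    J = 2.0 * mu * v_rel                    # elastic impulse magnitude
    v1 = v1 - (J / m1) * n
    v2 = v2 + (J / m2) * n

print("post-collision velocities:", v1, v2)
print("momentum conserved:",
      np.allclose(m1 * v1 + m2 * v2, np.array([0.0, 0.2])))
```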

  10. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  11. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    Science.gov (United States)

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
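
    Whatever estimator supplies the concentrations, the load itself is an integral of concentration times discharge. The sketch below shows that bookkeeping with synthetic daily data and the unit conversion from mg/L and m^3/s to tonnes per year; the regression step (such as WRTDS) that would model concentration from discharge, season and trend is not reproduced here.

```python
# Constituent load from paired concentration and discharge records:
# load = sum(C * Q * dt), with synthetic daily data.
import numpy as np

days = 365
rng = np.random.default_rng(8)
q = (30 + 20 * np.sin(2 * np.pi * np.arange(days) / 365)
     + rng.normal(0, 3, days))                  # daily discharge, m^3/s
c = 2.0 + 0.05 * q + rng.normal(0, 0.2, days)   # concentration, mg/L

# mg/L equals g/m^3, so C*Q is in g/s; integrate over each day (86400 s)
# and report the annual total in tonnes (1 t = 1e6 g).
daily_load_g = c * q * 86_400
annual_load_t = daily_load_g.sum() / 1e6
print(f"annual load ~ {annual_load_t:.1f} tonnes")
```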

  12. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada
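
    The three steps of the construction (smooth the topography, regress observed water levels on the smoothed surface, then apply radially decaying corrections at control points) are easy to sketch on a synthetic grid. The grid, control points and influence radius below are all illustrative assumptions.

```python
# Empirical water-table sketch: smoothed topography + regression + radial
# corrections at control points. All data are synthetic stand-ins.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(9)
topo = np.cumsum(rng.normal(0, 1, (60, 60)), axis=0) + 500.0  # fake terrain
smooth = uniform_filter(topo, size=15)                        # subdued image

# Observed water elevations at a few (row, col) control points.
pts = [(10, 12), (30, 40), (50, 20)]
obs = np.array([smooth[r, c] - rng.uniform(5, 15) for r, c in pts])

x = np.array([smooth[r, c] for r, c in pts])
slope, intercept = np.polyfit(x, obs, 1)          # linear regression
wt = slope * smooth + intercept                   # first approximation

rows, cols = np.indices(wt.shape)
for (r, c), z in zip(pts, obs):                   # radially decaying pull
    d = np.hypot(rows - r, cols - c)
    w = np.clip(1 - d / 25.0, 0, 1)               # influence radius ~25 cells
    wt += w * (z - wt[r, c])

print("residuals at controls:",
      [round(float(wt[r, c] - z), 6) for (r, c), z in zip(pts, obs)])
```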

  13. Effectiveness of different teaching methods on ergonomics for 12-16 year old children working with computers

    OpenAIRE

    Jasionytė, Monika

    2016-01-01

    Effectiveness of Different Teaching Methods on Ergonomics for 12-16 Year Old Children Working with Computers. Work author: Monika Jasionytė. Work advisor: assistant Inga Raudonytė, Vilnius University Faculty of Medicine, Department of Rehabilitation, Physical and Sports Medicine. Main concepts: ergonomics, children, methods. Work goal: figure out which teaching method is most efficient for teaching 12-16 year old children the ergonomics of working with a computer. Goals: 1. Figure out computer working place ergonom...

  14. Classification of methods of production of computer forensics using a graph theory approach

    OpenAIRE

    Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov

    2016-01-01

    A classification of methods of production of computer forensics using a graph theory approach is proposed. Using this classification, it is possible to accelerate and simplify the search for methods of production of computer forensics and to automate this process.

  15. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  16. Classification of methods of production of computer forensics using a graph theory approach

    Directory of Open Access Journals (Sweden)

    Anna Ravilyevna Smolina

    2016-06-01

    Full Text Available A classification of methods of production of computer forensics using a graph theory approach is proposed. Using this classification, it is possible to accelerate and simplify the search for methods of production of computer forensics and to automate this process.

  17. Reduction of power-line interference in electrocardiographic signals by means of the dual Kalman filter

    Directory of Open Access Journals (Sweden)

    Luis David Avendaño Valencia

    2007-09-01

    Full Text Available This article presents the development of a filter for reducing power-line interference in electrocardiographic (ECG) signals, based on dual state and parameter estimation using Kalman filtering, in which independent models are assumed for the power-line interference and the ECG signal. Both models are combined to simulate the measured ECG signal, on which state estimation is performed to separate the signal from the interference. The proposed algorithm is tuned and compared in a set of tests carried out on the QT electrocardiography database. First, tuning tests are run for tracking the clean ECG signal, and their results are then used in the filtering tests. Exhaustive tests are then carried out on the QT database for filtering power-line interference introduced artificially into the records at a given signal-to-noise ratio (SNR), yielding performance curves for the algorithm that in turn allow a comparison with other filtering algorithms, namely a recursive infinite impulse response (IIR) notch filter and a Kalman filter based on a simpler model of the ECG signal. As a result, it is shown that the obtained filtering algorithm is robust to changes in the amplitude of the interference; moreover, it preserves its properties across the different morphologies of normal and pathological ECG signals.
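
    The essence of tracking and removing a power-line component with a Kalman filter can be shown with a two-state rotating-phasor model of the 50 Hz interference, subtracting the estimated in-phase component from the measurement. This is a simplified stand-in for the paper's dual state/parameter estimator; the signal, noise levels and tuning below are synthetic.

```python
# Harmonic-tracking Kalman filter: estimate a 50 Hz power-line component
# with a rotating two-state phasor model and subtract it from the trace.
# A simplified stand-in for the dual estimator of the paper.
import numpy as np

fs, f0 = 500.0, 50.0                       # sample rate, mains frequency
n = 2000
t = np.arange(n) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63    # crude ECG-like spikes (synthetic)
meas = ecg + 0.3 * np.sin(2 * np.pi * f0 * t + 0.7)

w = 2 * np.pi * f0 / fs
F = np.array([[np.cos(w), -np.sin(w)],     # rotates the phasor each sample
              [np.sin(w),  np.cos(w)]])
H = np.array([[1.0, 0.0]])                 # we observe the in-phase part
Q, R = 1e-6 * np.eye(2), 0.1
x, P = np.zeros(2), np.eye(2)

clean = np.empty(n)
for i in range(n):
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # update with the raw measurement
    K = (P @ H.T) / S
    x = x + (K * (meas[i] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    clean[i] = meas[i] - x[0]              # subtract estimated interference

print("residual error variance:", np.var(clean - ecg).round(5))
```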

  18. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry, therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  19. A cognition-based method to ease the computational load for an extended Kalman filter.

    Science.gov (United States)

    Li, Yanpeng; Li, Xiang; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

    2014-12-03

    The extended Kalman filter (EKF) is the nonlinear variant of the Kalman filter (KF). It is a useful parameter estimation method when the observation model and/or the state transition model is not a linear function. However, the computational requirements of the EKF are a burden for the system. With the help of cognition-based design and the Taylor expansion method, a novel algorithm is proposed to ease the computational load of the EKF in azimuth prediction and localization under a nonlinear observation model. Where there are nonlinear functions and matrix inversions, this method makes use of the major components (selected according to the current performance and the performance requirements) of the Taylor expansion. As a result, the computational load is greatly lowered and the performance is ensured. Simulation results show that the proposed measure delivers filtering output with a precision similar to the regular EKF, while the computational load is substantially lowered.
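
    For reference, one predict/update cycle of a standard EKF with a bearing (azimuth) observation looks as follows; the observation h(x) = atan2(py, px) is linearized by its first-order Taylor term (the Jacobian). The paper's cognition-based selection of expansion components is not reproduced, and all numbers are illustrative.

```python
# One EKF cycle for a bearing-only observation of a 2-D constant-velocity
# target. Standard textbook form; values are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)        # constant-velocity transition
Q = 1e-3 * np.eye(4)
R = np.array([[np.deg2rad(1.0) ** 2]])     # 1-degree bearing noise

x = np.array([1000.0, 800.0, -5.0, 2.0])   # state: px, py, vx, vy
P = np.diag([100.0, 100.0, 4.0, 4.0])
z = np.array([np.arctan2(810.0, 990.0)])   # one noisy bearing measurement

# Predict.
x = F @ x
P = F @ P @ F.T + Q

# Linearize h(x) = atan2(py, px) around the prediction (the Jacobian).
px, py = x[0], x[1]
r2 = px ** 2 + py ** 2
H = np.array([[-py / r2, px / r2, 0.0, 0.0]])

# Update.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + (K @ (z - np.arctan2(py, px))).ravel()
P = (np.eye(4) - K @ H) @ P
print("updated position estimate:", x[:2].round(1))
```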

  20. Simple perineal anoplasty for the treatment of low anorectal malformations in adults: report of two cases

    Science.gov (United States)

    Echchaoui, Abdelmoughit; Benyachou, Malika; Hafidi, Jawad; Fathi, Nahed; Mohammadine, Elhamid; ELmazouz, Samir; Gharib, Nour-eddine; Abbassi, Abdellah

    2014-01-01

    Anorectal malformations in adults are rare congenital anomalies of the digestive tract that predominate in females. Our study concerns two cases of low anorectal malformation seen and treated in adulthood by the two teams (plastic and visceral surgeons) at Avicenne Hospital in Rabat. The first case was a 24-year-old man with anal dyschezia; the other was an 18-year-old woman with an anovulvar malformation. The clinical features, combined with radiological imaging (barium enema and anorectal manometry), confirmed a low anorectal malformation. Both cases were corrected by sphincter reconstruction and anal reimplantation with perineal anoplasty. The postoperative course was uneventful, with no skin compromise or necrosis, and daily greasy-gauze dressing changes. The functional outcome (continence) was favorable for both patients. Presentation of anorectal malformations in adulthood is rare, their etiology is poorly understood, and they occur sporadically. The clinical features, coupled with imaging (barium enema, pelvic MRI), endoscopy and anorectal manometry, confirm the diagnosis and allow these anomalies to be classified into three types: low, intermediate and high. Low forms are treated directly by anal reimplantation and simple perineal anoplasty, as in our two cases; in some cases they can be treated by an anorectal pull-through combined with a V-Y plasty, allowing a correct anatomical placement of the anus, whereas high or intermediate forms require complex surgery, often with a temporary digestive diversion. Unlike the other forms, low forms have a favorable functional prognosis. PMID:25667689

  1. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Aiming at the great challenge for computer-generated holography (CGH), namely that a high space-bandwidth product (SBP) is required in real-time holographic video display systems, this paper builds on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; the Fresnel diffraction fringe pattern at the dummy plane is then obtained; finally, Fresnel propagation is carried out from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on liquid crystal on silicon (LCOS) are set up to demonstrate the validity of the proposed method; while preserving the quality of the 3D reconstruction, the method can shorten the computation time and improve computational efficiency.

  2. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  3. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
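
    The simulation logic is straightforward to reproduce in miniature: generate random event bouts on a timeline, then score the same timeline with momentary time sampling, partial-interval and whole-interval recording. The durations and rates below are illustrative, not the paper's parameter grid.

```python
# Comparing interval sampling methods against the true event duration.
# Session length, interval size and event bouts are illustrative.
import numpy as np

rng = np.random.default_rng(10)
session, interval = 600.0, 10.0            # seconds
n_int = int(session / interval)

# Boolean timeline at 0.1 s resolution with random event bouts.
t = np.arange(0, session, 0.1)
on = np.zeros(t.size, bool)
for start in rng.uniform(0, session - 8, 20):
    on[(t >= start) & (t < start + rng.uniform(1, 8))] = True

true_frac = on.mean()
blocks = on.reshape(n_int, -1)
mts = blocks[:, -1].mean()                 # sample only the interval's end
pir = blocks.any(axis=1).mean()            # any occurrence scores it
wir = blocks.all(axis=1).mean()            # only full-interval occurrences
print(f"true {true_frac:.3f}  MTS {mts:.3f}  PIR {pir:.3f}  WIR {wir:.3f}")
```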

  4. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
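
    The DUA idea of propagating distributions through derivative information can be illustrated with first-order variance propagation, var(f) approximately g^T C g for gradient g and parameter covariance C. The toy response function and parameter spreads below are assumptions, and central finite differences stand in for the derivatives that GRESS/ADGEN would obtain by computer calculus.

```python
# First-order deterministic uncertainty propagation: var(f) ~ g' C g.
# The borehole-like response and covariances are illustrative stand-ins.
import numpy as np

def flow(p):
    """Toy borehole-like response from three lumped parameters."""
    k, dh, L = p                            # permeability, head drop, length
    return k * dh / L

nominal = np.array([1.0e-3, 50.0, 100.0])
cov = np.diag([(2e-4) ** 2, 5.0 ** 2, 10.0 ** 2])  # assumed covariance

# Central finite differences stand in for automated code derivatives.
eps = 1e-6 * np.abs(nominal)
grad = np.array([(flow(nominal + dp) - flow(nominal - dp)) / (2 * e)
                 for e, dp in zip(eps, np.diag(eps))])

var = grad @ cov @ grad
print(f"flow = {flow(nominal):.4e}, std (1st order) = {np.sqrt(var):.2e}")
```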

  5. Performance optimization of grooved slippers for aero hydraulic pumps

    Directory of Open Access Journals (Sweden)

    Juan Chen

    2016-06-01

    Full Text Available A computational fluid dynamics (CFD) simulation method based on the 3-D Navier–Stokes equations and the Arbitrary Lagrangian–Eulerian (ALE) method is presented to analyze the grooved slipper performance of a piston pump. The moving domain of the grooved slipper is transformed into a fixed reference domain by the ALE method, which makes it convenient to take the effects of rotation speed, body force, temperature, and oil viscosity into account. A geometric model expressing the complex structure, which covers the orifice of the piston and slipper, the vented groove and the oil film, is constructed. For different oil film thicknesses, calculated in light of hydrostatic equilibrium theory and the boundary conditions, a set of simulations is conducted in COMSOL to analyze the pump characteristics and the effects of geometry (groove width and radius, orifice size) on these characteristics. Furthermore, mechanics and hydraulics analyses are employed to validate the CFD model, and there is excellent agreement between the simulation and analytical results. The simulation results show that the sealing land radius, orifice size and groove width all dramatically affect the slipper behavior, and an optimum tradeoff among these factors is conducive to optimizing the pump design.

  6. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    Science.gov (United States)

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  7. PSYCHOSOCIAL FACTORS AND PECULIARITIES OF ADAPTATION OF CHILDREN WITH SPECIAL EDUCATIONAL REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Segiu TOMA

    2016-12-01

    Full Text Available The article addresses the psychosocial factors and peculiarities of the adaptation of children with special educational needs (SEN) in educational institutions. The emphasis is on the analysis of psychological and social factors, with "intelligence" highlighted as a regulatory mechanism. Another factor concerns extrinsic and intrinsic motivation. The level of aspiration, affectivity, the capacity for self-regulation, and self-image are analyzed in depth as determining factors in the adaptation of children with SEN in educational institutions.

  8. Photo-irradiation effects on GaAs atomic layer epitaxial growth. GaAs no genshiso epitaxial seicho ni okeru hikari reiki koka

    Energy Technology Data Exchange (ETDEWEB)

    Mashita, M.; Kawakyu, Y.; Sasaki, M.; Ishikawa, H. (Toshiba Corp., Kawasaki (Japan). Research and Development Center)

    1990-08-10

    Single atomic layer epitaxy (ALE) aims at controlling film growth with a precision of a single molecular layer. In this article, it is reported that the growth temperature range of ALE was expanded by vertical irradiation of a KrF excimer laser (248 nm) onto the substrate during ALE growth of GaAs using the metalorganic chemical vapor deposition (MOCVD) method. The results of this experiment demonstrated that the irradiation effect was not thermal but photochemical. In addition, the article examines adsorption-layer excitation and surface excitation as candidate photo-irradiation mechanisms, and points out that the two mechanisms may coexist and that, in the case of the excimer laser, direct excitation of the adsorption layer is highly likely because of its high power density. By using photo-assisted ALE and thermal ALE jointly, the degrees of freedom in combining hetero-ALE structures increase, and application to various material systems becomes possible. 16 refs., 6 figs.

  9. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general educational and worldview functions of computer science define the necessity of additional research of the…

  10. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described which is intended to account for and eliminate the effect of secondary processes when interpreting gamma and neutron-gamma logging readings. With slight modifications, the program can also be used as a mathematical basis for standardizing logging diagrams by the method of multidimensional regression analysis and for estimating rock reservoir properties.

  11. Simplified computational methods for elastic and elastic-plastic fracture problems

    Science.gov (United States)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  12. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  13. A computational method for sharp interface advection.

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.

  13. Application of computational methods in genetic study of inflammatory bowel disease.

    Science.gov (United States)

    Li, Jin; Wei, Zhi; Hakonarson, Hakon

    2016-01-21

    Genetic factors play an important role in the etiology of inflammatory bowel disease (IBD). The launch of the genome-wide association study (GWAS) represents a landmark in the genetic study of human complex disease. Concurrently, computational methods have undergone rapid development during the past few years, which has led to the identification of numerous disease susceptibility loci. IBD is one of the successful examples of GWAS and related analyses. A total of 163 genetic loci and multiple signaling pathways have been identified to be associated with IBD. Pleiotropic effects were found for many of these loci, and risk prediction models were built based on a broad spectrum of genetic variants. Important gene-gene and gene-environment interactions and key contributions of the gut microbiome are being discovered. Here we review the different types of analyses that have been applied to the genetic study of IBD, discuss the computational methods for each type of analysis, and summarize the discoveries made in IBD research with the application of these methods.

  14. Evolution of Escherichia coli to 42 °C and Subsequent Genetic Engineering Reveals Adaptive Mechanisms and Novel Mutations

    DEFF Research Database (Denmark)

    Sandberg, Troy E.; Pedersen, Margit; LaCroix, Ryan A.

    2014-01-01

    Adaptive laboratory evolution (ALE) has emerged as a valuable method by which to investigate microbial adaptation to a desired environment. Here, we performed ALE to 42 °C of ten parallel populations of Escherichia coli K-12 MG1655 grown in glucose minimal media. Tightly controlled experimental c...... targets for additional ameliorating mutations. Overall, the results of this study provide insight into the adaptation process and yield lessons important for the future implementation of ALE as a tool for scientific research and engineering....

  15. Parallel computation of multigroup reactivity coefficient using iterative method

    Science.gov (United States)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target is a stainless steel tube containing highly enriched uranium; irradiating it produces the fission products that are widely used in kits in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb reactor performance, one source of disturbance being changes in flux or reactivity. It is therefore necessary to have a method for assessing safety as the core configuration changes over the life of the reactor, and making the code faster becomes essential. An advantage of the perturbation method is that the neutron safety margin of the research reactor can be re-evaluated without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel algorithms with iterative methods have been developed for solving the resulting large, sparse matrix systems. The black-red Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, as one part of safety analysis, was developed with parallel processing; the calculation can be done more quickly and efficiently by utilizing the multiple cores of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
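
    As a concrete illustration of the colouring idea behind the black-red (red-black) Gauss-Seidel iteration, the sketch below solves a toy 2D Poisson problem rather than the multigroup diffusion system of the paper; the grid size, source term and sweep count are illustrative.

      import numpy as np

      def red_black_gauss_seidel(b, h, sweeps=500):
          """Solve the 2D Poisson problem -lap(u) = b (u = 0 on the boundary)
          with red-black Gauss-Seidel: points of one colour depend only on
          the other colour, so each half-sweep can run fully in parallel."""
          u = np.zeros_like(b)
          n = b.shape[0]
          ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          masks = []
          for colour in (0, 1):
              m = ((ii + jj) % 2 == colour)
              m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False   # keep boundary fixed
              masks.append(m)
          for _ in range(sweeps):
              for m in masks:               # one parallel half-sweep per colour
                  u[m] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                                 np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                                 h * h * b)[m]
          return u

      # Toy check: point source at the centre of a 65 x 65 grid
      n, h = 65, 1.0 / 64
      b = np.zeros((n, n)); b[n // 2, n // 2] = 1.0 / h ** 2
      print(red_black_gauss_seidel(b, h)[n // 2, n // 2])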

  16. Application of the Ssub(n)-method for reactors computations on BESM-6 computer by using 26-group constants in the sub-group presentation

    International Nuclear Information System (INIS)

    Rogov, A.D.

    1975-01-01

    A description is given of computer programs for reactor computation by application of the Ssub(n)-method in two-dimensional XY and RZ geometries. The programs are used together with the computer library of the 26-group constants system, taking into account the resonance structure of the cross sections in the subgroup representation. Results of computations for several systems are given and the results obtained are analysed. (author)

  17. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
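
    The paper's algebraic reduction of eleven species is not reproduced here, but the pattern — combining an equilibrium relation with a mass balance and solving the resulting scalar equation — can be shown on a single-reaction toy problem. The function below treats the dissociation O2 <-> 2 O; the equilibrium-constant values are purely illustrative.

      from scipy.optimize import brentq

      def o2_dissociation_fraction(Kp, p):
          """Degree of dissociation x for the toy reaction O2 <-> 2 O at total
          pressure p, from the equilibrium relation
              Kp = p_O**2 / p_O2 = 4 x**2 p / (1 - x**2),
          which already has the elemental mass balance built into x."""
          resid = lambda x: 4.0 * x * x * p - Kp * (1.0 - x * x)
          return brentq(resid, 0.0, 1.0)   # sign change is guaranteed on [0, 1]

      for Kp in (1e-4, 1e-2, 1.0, 1e2):    # Kp rises steeply with temperature
          print(Kp, o2_dissociation_fraction(Kp, p=1.0))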

  18. Computational Quantum Mechanics for Materials Engineers The EMTO Method and Applications

    CERN Document Server

    Vitos, L

    2007-01-01

    Traditionally, new materials have been developed by empirically correlating their chemical composition, and the manufacturing processes used to form them, with their properties. Until recently, metallurgists have not used quantum theory for practical purposes. However, the development of modern density functional methods means that today, computational quantum mechanics can help engineers to identify and develop novel materials. Computational Quantum Mechanics for Materials Engineers describes new approaches to the modelling of disordered alloys that combine the most efficient quantum-level th

  19. An Efficient Hierarchical Multiscale Finite Element Method for Stokes Equations in Slowly Varying Media

    KAUST Repository

    Brown, Donald L.

    2013-01-01

    Direct numerical simulation (DNS) of fluid flow in porous media with many scales is often not feasible, and an effective or homogenized description is more desirable. To construct the homogenized equations, effective properties must be computed. Computation of effective properties for nonperiodic microstructures can be prohibitively expensive, as many local cell problems must be solved for different macroscopic points. In addition, the local problems may also be computationally expensive. When the microstructure varies slowly, we develop an efficient numerical method for two scales that achieves essentially the same accuracy as that for the full resolution solve of every local cell problem. In this method, we build a dense hierarchy of macroscopic grid points and a corresponding nested sequence of approximation spaces. Essentially, solutions computed in high accuracy approximation spaces at select points in the hierarchy are used as corrections for the error of the lower accuracy approximation spaces at nearby macroscopic points. We give a brief overview of slowly varying media and formal Stokes homogenization in such domains. We present a general outline of the algorithm and list reasonable and easily verifiable assumptions on the PDEs, geometry, and approximation spaces. With these assumptions, we achieve the same accuracy as the full solve. To demonstrate the elements of the proof of the error estimate, we use a hierarchy of macro-grid points in [0, 1]² and finite element (FE) approximation spaces in [0, 1]². We apply this algorithm to the Stokes equations in a slowly varying porous medium where the microstructure is obtained from a reference periodic domain by a known smooth map. Using the arbitrary Lagrangian-Eulerian (ALE) formulation of the Stokes equations (cf. [G. P. Galdi and R. Rannacher, Fundamental Trends in Fluid-Structure Interaction, Contemporary Challenges in Mathematical Fluid Dynamics and Its Applications 1, World Scientific, Singapore, 2010]), we obtain

  1. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to computing the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computation and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. For managerial purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is also conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
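
    The Markovian building block can be made concrete with a minimal sketch: a single unit alternating between a working and a failed state as a continuous-time Markov chain, with point availability read off the matrix exponential of the generator. The failure and repair rates below are illustrative, not taken from the paper's AGV case.

      import numpy as np
      from scipy.linalg import expm

      # Two-state continuous-time Markov chain for one machine-like unit:
      # state 0 = working, state 1 = failed.
      lam, mu = 0.05, 0.5            # illustrative failure / repair rates (per hour)
      Q = np.array([[-lam,  lam],
                    [  mu,  -mu]])   # generator matrix (rows sum to zero)

      p0 = np.array([1.0, 0.0])      # start in the working state
      for t in (1.0, 10.0, 100.0):
          pt = p0 @ expm(Q * t)      # state distribution at time t
          print(t, pt[0])            # point availability A(t)

      # Steady-state availability mu / (lam + mu) for comparison
      print(mu / (lam + mu))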

  2. Talbot's method for the numerical inversion of Laplace transforms: an implementation for personal computers

    International Nuclear Information System (INIS)

    Garratt, T.J.

    1989-05-01

    Safety assessments of radioactive waste disposal require efficient computer models for the important processes. The present paper is based on an efficient computational technique which can be used to solve a wide variety of safety assessment models. It involves the numerical inversion of analytical solutions to the Laplace-transformed differential equations using a method proposed by Talbot. This method has been implemented on a personal computer in a user-friendly manner. The steps required to implement a particular transform and run the program are outlined. Four examples are described which illustrate the flexibility, accuracy and efficiency of the program. The improvements in computational efficiency described in this paper have application to the probabilistic safety assessment codes ESCORT and MASCOT which are currently under development. Also, it is hoped that the present work will form the basis of software for personal computers which could be used to demonstrate safety assessment procedures to a wide audience. (author)
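
    A minimal sketch of the fixed-Talbot variant of the method (following Abate and Valko's parametrization of the deformed Bromwich contour) shows how little code the inversion needs; the node count M and the test transform are illustrative, and this is not the program described in the report.

      import numpy as np

      def talbot_invert(F, t, M=32):
          """Numerically invert a Laplace transform F(s) at time t > 0 with
          the fixed-Talbot rule: the Bromwich contour is deformed onto
          s(theta) = r * theta * (cot(theta) + i) and the contour integral
          is approximated with M nodes."""
          r = 2.0 * M / (5.0 * t)
          total = 0.5 * np.exp(r * t) * F(r)            # theta = 0 endpoint term
          for k in range(1, M):
              theta = k * np.pi / M
              cot = np.cos(theta) / np.sin(theta)
              s = r * theta * (cot + 1j)
              sigma = theta + (theta * cot - 1.0) * cot
              total += (np.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
          return (r / M) * total

      # Check against F(s) = 1/(s + 1), whose inverse transform is exp(-t)
      for t in (0.5, 1.0, 2.0):
          print(t, talbot_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))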

  3. Splitting method for computing coupled hydrodynamic and structural response

    International Nuclear Information System (INIS)

    Ash, J.E.

    1977-01-01

    A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data

  4. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)
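
    For problem (1), the stochastic flavour of the analysis can be sketched as a Monte Carlo net-present-value calculation with uncertain cost, output and price; every figure below is invented for illustration and does not come from the article.

      import numpy as np

      rng = np.random.default_rng(0)
      n, years, rate = 100_000, 20, 0.08

      capex = rng.normal(12e6, 1.5e6, n)               # illustrative drilling cost
      energy = rng.triangular(30e3, 45e3, 55e3, n)     # MWh sold per year
      price = rng.normal(60.0, 8.0, n)                 # currency units per MWh
      opex = 0.6e6                                     # fixed annual cost

      annuity = (1.0 - (1.0 + rate) ** -years) / rate  # sum of discount factors
      npv = (energy * price - opex) * annuity - capex

      print("mean NPV     :", npv.mean())
      print("P(NPV < 0)   :", (npv < 0).mean())
      print("5%-95% range :", np.percentile(npv, [5, 95]))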

  5. Slepian modeling as a computational method in random vibration analysis of hysteretic structures

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1999-01-01

    white noise. The computation time for obtaining estimates of relevant statistics on a given accuracy level is decreased by factors of one or more orders of size as compared to the computation time needed for direct elasto-plastic displacement response simulations by vectorial Markov sequence techniques....... Moreover the Slepian method gives valuable physical insight about the details of the plastic displacement development by time. The paper gives a general self-contained mathematical description of the Slepian method based plastic displacement analysis of Gaussian white noise excited EPOs. Experiences...

  6. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    International Nuclear Information System (INIS)

    Wilkening, Jon; Yu Jia

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge–Kutta or spectral deferred correction method in time and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. Implementation issues for transferring much of the computation to a graphics processing unit are briefly
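
    The shooting idea — integrate over one period and drive the return-map residual to zero with a trust-region least-squares solver — can be shown on a far simpler problem than the free-surface Euler equations. The sketch below finds the limit cycle of the van der Pol oscillator; the oscillator, tolerances and initial guess are all illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def vdp(t, y, mu=1.0):                       # van der Pol oscillator
          return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

      def residual(z):
          """Unknowns z = (amplitude a, period T); the phase is fixed by
          starting at y'(0) = 0. Residual = state after one period minus
          the initial state (the shooting condition)."""
          a, T = z
          sol = solve_ivp(vdp, [0.0, T], [a, 0.0], rtol=1e-10, atol=1e-12)
          return sol.y[:, -1] - np.array([a, 0.0])

      # Trust-region least-squares on the shooting residual
      fit = least_squares(residual, x0=[2.0, 6.0], method="trf")
      print("amplitude, period:", fit.x)            # period ~ 6.66 for mu = 1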

  7. Advanced Computational Methods in Bio-Mechanics.

    Science.gov (United States)

    Al Qahtani, Waleed M S; El-Anwar, Mohamed I

    2018-04-15

    A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with fewer traumas, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have a similar impact on surgery to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and the performing arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical purposes, numerous research efforts funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.

  8. Computing eigenvalue sensitivity coefficients to nuclear data based on the CLUTCH method with RMC code

    International Nuclear Information System (INIS)

    Qiu, Yishu; She, Ding; Tang, Xiao; Wang, Kan; Liang, Jingang

    2016-01-01

    Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there has been a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most Monte Carlo codes that have the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though it is theoretically accurate, the IFP method faces the challenge of huge memory consumption. Therefore, it may sometimes produce poor sensitivity coefficients, since the number of particles in each active cycle is not sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely, the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates at every collision, the F-CLUTCH algorithm stores them only at every fission point. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified by a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate the accuracy of the F-CLUTCH algorithm is the same as the C-CLUTCH algorithm

  9. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and therefore the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze any scanner's LCD performance. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT
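
    The statistic described — random sampling of ROI means in a uniform image, with Student's t giving the smallest mean offset distinguishable from background — can be sketched in a few lines. This is a reconstruction from the abstract, not the authors' MATLAB tool, and the noise level, ROI count and synthetic phantom below are illustrative.

      import numpy as np
      from scipy import stats

      def lcd_hu(image, roi_px, n_rois=200, conf=0.95, rng=None):
          """Estimate low-contrast detectability: the smallest mean HU offset
          of a virtual object (roi_px x roi_px pixels) that can be told apart
          from a uniform background at the given confidence level."""
          if rng is None:
              rng = np.random.default_rng(0)
          h, w = image.shape
          means = np.empty(n_rois)
          for k in range(n_rois):
              i = rng.integers(0, h - roi_px)
              j = rng.integers(0, w - roi_px)
              means[k] = image[i:i + roi_px, j:j + roi_px].mean()
          t_crit = stats.t.ppf(conf, df=n_rois - 1)
          return t_crit * means.std(ddof=1)

      # Synthetic uniform 'phantom': zero-HU background with 10 HU of noise
      phantom = np.random.default_rng(1).normal(0.0, 10.0, (512, 512))
      for size in (2, 4, 8, 16):
          print(size, round(lcd_hu(phantom, size), 2))   # LCD falls as size grows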

  10. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast; contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The sample of the study included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Another
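
    The watershed pipeline named in the abstract is sketched below with generic scikit-image calls on a synthetic image; this is not the study's code, and the smoothing width, seed thresholds and test image are illustrative only.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import gaussian, sobel
      from skimage.segmentation import watershed

      def watershed_segment(ct_slice, low, high):
          """Marker-based watershed: smooth, build an edge-strength elevation
          map, seed markers from intensity thresholds, then flood."""
          smooth = gaussian(ct_slice, sigma=2)    # suppress noise and artifacts
          elevation = sobel(smooth)               # edges become the 'ridges'
          markers = np.zeros_like(ct_slice, dtype=int)
          markers[smooth < low] = 1               # background seed
          markers[smooth > high] = 2              # organ seed
          labels = watershed(elevation, markers)
          return ndi.binary_fill_holes(labels == 2)

      # Synthetic bright 'organ' on a dark background, illustrative thresholds
      img = np.zeros((128, 128)); img[30:90, 40:100] = 1.0
      img += np.random.default_rng(0).normal(0, 0.1, img.shape)
      mask = watershed_segment(img, low=0.2, high=0.8)
      print(mask.sum(), "pixels segmented")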

  11. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA), along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  12. Computational Methods for Inviscid and Viscous Two-and-Three-Dimensional Flow Fields.

    Science.gov (United States)

    1975-01-01

    Difference Equations Over a Network, Watson Sci. Comput. Lab. Report, 1949. 173. Isaacson, E. and Keller, H. B., Analysis of Numerical Methods...element method has given a new impulse to the old mathematical theory of multivariate interpolation. We first study the one-dimensional case, which

  13. 2nd International Conference on Multiscale Computational Methods for Solids and Fluids

    CERN Document Server

    2016-01-01

    This volume contains the best papers presented at the 2nd ECCOMAS International Conference on Multiscale Computations for Solids and Fluids, held June 10-12, 2015. Topics dealt with include multiscale strategy for efficient development of scientific software for large-scale computations, coupled probability-nonlinear-mechanics problems and solution methods, and modern mathematical and computational setting for multi-phase flows and fluid-structure interaction. The papers consist of contributions by six experts who taught short courses prior to the conference, along with several selected articles from other participants dealing with complementary issues, covering both solid mechanics and applied mathematics.

  14. Reconstruction of computed tomographic image from a few x-ray projections by means of accelerative gradient method

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1982-01-01

    A method for the reconstruction of computed tomographic images from a small number of X-ray projections, based on an accelerative gradient method, is proposed to reduce the X-ray exposure dose. The procedures of the computation are described. The algorithm of these procedures is simple, the convergence of the computation is fast, and the required memory capacity is small. Numerical simulation was carried out to confirm the validity of this method. A sample of simple shape was considered, projection data were given, and the images were reconstructed from 6 views. Good results were obtained, and the method is considered to be useful. (Kato, T.)
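
    The structure of such an iterative reconstruction can be sketched with a plain gradient (Landweber) update on a toy system; the paper's accelerative scheme is not reproduced here — acceleration would add a momentum-like term to the same update — and the tiny 'projection' matrix below is invented for illustration.

      import numpy as np

      def landweber(A, b, lam, iters=500):
          """Gradient iteration x <- x + lam * A^T (b - A x) for the
          least-squares reconstruction problem min ||A x - b||^2."""
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x += lam * A.T @ (b - A @ x)
          return x

      # Toy 'tomography': a 4x4 image measured only by row and column sums
      rng = np.random.default_rng(0)
      truth = rng.random((4, 4)).ravel()
      rows = np.kron(np.eye(4), np.ones((1, 4)))   # 4 row-sum projections
      cols = np.kron(np.ones((1, 4)), np.eye(4))   # 4 column-sum projections
      A = np.vstack([rows, cols])                  # 8 measurements, 16 unknowns
      b = A @ truth
      x = landweber(A, b, lam=0.1)
      print(np.abs(A @ x - b).max())               # the data are matched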

  15. Non-linear heat transfer computer code by finite element method

    International Nuclear Information System (INIS)

    Nagato, Kotaro; Takikawa, Noboru

    1977-01-01

    The computer code THETA-2D, for the calculation of temperature distributions by the two-dimensional finite element method, was made for the analysis of heat transfer in high temperature structures. A numerical experiment was performed on the numerical integration of the differential equation of heat conduction. In this experiment the Runge-Kutta method produced an unstable solution; a stable solution was obtained by the β method with a β value of 0.35. In high temperature structures, radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiative heat transfer was derived first; the radiative term was then added after the discretization by the variational method. Five model calculations were carried out with the computer code. A calculation of steady heat conduction was performed; when the estimated initial temperature was 1,000 degree C, a reasonable heat balance was obtained. In the case of a combined steady-unsteady temperature calculation, the time integral by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution was calculated in a structure in which the heat conductivity depends on temperature. A calculation was performed with a model that has a void inside. Finally, a model calculation for a complex system was carried out. (Kato, T.)

  16. A Modified Computational Scheme for the Stochastic Perturbation Finite Element Method

    Directory of Open Access Journals (Sweden)

    Feng Wu

    A modified computational scheme of the stochastic perturbation finite element method (SPFEM) is developed for structures with low-level uncertainties. The proposed scheme can provide second-order estimates of the mean and variance without differentiating the system matrices with respect to the random variables. When the proposed scheme is used, it involves finite analyses of deterministic systems. In the case of one random variable with a symmetric probability density function, the proposed computational scheme can even provide a result with fifth-order accuracy. Compared with the traditional computational scheme of SPFEM, the proposed scheme is more convenient for numerical implementation. Four numerical examples demonstrate that the proposed scheme can be used in linear or nonlinear structures with correlated or uncorrelated random variables.

  17. Homogenized parameters of light water fuel elements computed by a perturbative (perturbation) method

    International Nuclear Information System (INIS)

    Koide, Maria da Conceicao Michiyo

    2000-01-01

    A new analytic formulation for material parameters homogenization of the two dimensional and two energy-groups diffusion model has been successfully used as a fast computational tool for recovering the detailed group fluxes in full reactor cores. The homogenization method which has been proposed does not require the solution of the diffusion problem by a numerical method. As it is generally recognized that currents at assembly boundaries must be computed accurately, a simple numerical procedure designed to improve the values of currents obtained by nodal calculations is also presented. (author)

  18. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.

  19. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process in which accuracy, automation and robustness are required, especially for medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to follow the growth of a tumor, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems they incur will be defined and explained. It is necessary to know the suitable segmentation method in order to get an accurate segmentation. This paper can be a guide for researchers in choosing the suitable segmentation method, especially for segmenting images from CT scans.

  20. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    2011-01-01

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....

  1. Computation of tightly-focused laser beams in the FDTD method.

    Science.gov (United States)

    Capoğlu, Ilker R; Taflove, Allen; Backman, Vadim

    2013-01-14

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software ("Angora").
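
    The decomposition idea can be illustrated with a scalar, two-dimensional analogue: summing a finite fan of plane waves over the focusing aperture reproduces a focal spot. The sketch below is unrelated to the Angora implementation, and the wavelength, aperture angle and node count are arbitrary.

      import numpy as np

      # 2D scalar sketch: approximate a cylindrically focused beam by summing
      # N plane waves whose propagation directions span the focusing aperture.
      wavelength = 0.5e-6
      k = 2 * np.pi / wavelength
      half_angle = np.deg2rad(40)                  # illustrative aperture half-angle
      thetas = np.linspace(-half_angle, half_angle, 101)

      x = np.linspace(-3e-6, 3e-6, 241)
      z = np.linspace(-3e-6, 3e-6, 241)
      X, Z = np.meshgrid(x, z, indexing="ij")

      field = np.zeros_like(X, dtype=complex)
      for th in thetas:                            # each term = one plane wave
          field += np.exp(1j * k * (X * np.sin(th) + Z * np.cos(th)))
      field /= len(thetas)

      intensity = np.abs(field) ** 2               # peaks at the focus (0, 0)
      print(intensity.max(), np.unravel_index(intensity.argmax(), intensity.shape))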

  2. Parallel performances of three 3D reconstruction methods on MIMD computers: Feldkamp, block ART and SIRT algorithms

    International Nuclear Information System (INIS)

    Laurent, C.; Chassery, J.M.; Peyrin, F.; Girerd, C.

    1996-01-01

    This paper deals with parallel implementations of reconstruction methods in 3D tomography. 3D tomography requires voluminous data and long computation times. Parallel computing, on MIMD computers, seems to be a good approach to manage this problem. In this study, we present the different steps of the parallelization on an abstract parallel computer. Depending on the method, we use two main approaches to parallelize the algorithms: the local approach and the global approach. Experimental results on MIMD computers are presented. Two 3D images reconstructed from realistic data are shown.

  3. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS's. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given on the effects of inappropriate or 'blind' use of existing tools on ADS studies. Reference is made to available experimental data that can be used for validating the use of these methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  4. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  5. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  6. Comparison of microscopic method and computational program for pesticide deposition evaluation of spraying

    Directory of Open Access Journals (Sweden)

    Chaim Aldemir

    2002-01-01

    The main objective of this work was to compare two methods of estimating the deposition of pesticide applied by aerial spraying. One hundred and fifty pieces of water-sensitive paper were distributed over an area 50 m long by 75 m wide for sampling droplets sprayed by an aircraft calibrated to apply a spray volume of 32 L/ha. The samples were analysed by a visual microscopic method using an NG 2 Porton graticule and by an image-analyser computer program. The results reached by the visual microscopic method were the following: volume median diameter, 398±62 μm; number median diameter, 159±22 μm; droplet density, 22.5±7.0 droplets/cm²; and estimated deposited volume, 22.2±9.4 L/ha. The respective values reached with the computer program were: 402±58 μm, 161±32 μm, 21.9±7.5 droplets/cm² and 21.9±9.2 L/ha. Graphs of the spatial distribution of droplet density and deposited spray volume over the area were produced by the computer program.

  7. A surface capturing method for the efficient computation of steady water waves

    NARCIS (Netherlands)

    Wackers, J.; Koren, B.

    2008-01-01

    A surface capturing method is developed for the computation of steady water–air flow with gravity. Fluxes are based on artificial compressibility and the method is solved with a multigrid technique and line Gauss–Seidel smoother. A test on a channel flow with a bottom bump shows the accuracy of the

  8. A Computer Game-Based Method for Studying Bullying and Cyberbullying

    Science.gov (United States)

    Mancilla-Caceres, Juan F.; Espelage, Dorothy; Amir, Eyal

    2015-01-01

    Even though previous studies have addressed the relation between face-to-face bullying and cyberbullying, none have studied both phenomena simultaneously. In this article, we present a computer game-based method to study both types of peer aggression among youth. Study participants included fifth graders (N = 93) in two U.S. Midwestern middle…

  9. Computer game-based and traditional learning method: a comparison regarding students' knowledge retention.

    Science.gov (United States)

    Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina

    2013-02-25

    Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for Anatomy questions and for Physiology questions. Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both post-test and long-term post-test when considering the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective in improving students' short- and long-term knowledge retention.

  10. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that a single species forms an ensemble in which the times of disintegration follow a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon 222 and the emission of alpha particles from polonium 218 and polonium 214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for the measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium 218 is not independent of the time of decay of the subsequent polonium 214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure.
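
    A much-simplified version of such a simulation is sketched below: waiting times are sampled through the Po-218 -> Pb-214 -> Bi-214 -> Po-214 chain and the alpha emissions falling inside a counting window are tallied. Exponential waiting times (the continuous analogue of the geometric sampling described) and approximate half-lives are used, and the window and population sizes are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      # Approximate half-lives (minutes) of the short-lived radon daughters
      chain = {"Po-218": 3.05, "Pb-214": 26.8, "Bi-214": 19.7, "Po-214": 164e-6 / 60}
      alphas = {"Po-218", "Po-214"}            # the two alpha-emitting steps

      def alpha_times(n_atoms, t_count):
          """Simulate n_atoms of Po-218 decaying through the chain; return the
          times (within the counting window) of the two alpha emissions."""
          emitted = []
          t = np.zeros(n_atoms)
          for nuclide, t_half in chain.items():
              t = t + rng.exponential(t_half / np.log(2.0), n_atoms)
              if nuclide in alphas:
                  emitted.append(t.copy())
          emitted = np.concatenate(emitted)
          return emitted[emitted < t_count]

      # Repeat the simulated count to see the statistical spread directly
      counts = [alpha_times(1000, 30.0).size for _ in range(200)]
      print(np.mean(counts), np.std(counts))   # spread = counting uncertainty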

  11. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration on the interface tension forces estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the last years in order to enhance accuracy in normal vectors and curvature estimates including height functions, parabolic fitting of the volume fraction, reconstructing distance functions, coupling Level Set method with VOF, convolving the volume fraction field with smoothing kernels among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data and significant reduction on spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
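
    One simple way to turn a local cloud of interface points into a curvature estimate — in the spirit of the geometric computation described, though not the authors' algorithm — is an algebraic circle fit: in 2D, fit x² + y² = 2ax + 2by + c in least squares and take the curvature as 1/R with R = sqrt(a² + b² + c). A minimal sketch with synthetic points:

      import numpy as np

      def curvature_from_points(pts):
          """Algebraic (Kasa) circle fit through a cloud of interface points:
          solve x^2 + y^2 = 2 a x + 2 b y + c in least squares; the fitted
          radius gives the curvature 1/R."""
          A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
          rhs = (pts ** 2).sum(axis=1)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          R = np.sqrt(a * a + b * b + c)
          return 1.0 / R

      # Noisy sample of a circle of radius 0.25 -> expected curvature 4
      rng = np.random.default_rng(0)
      th = rng.uniform(0, 2 * np.pi, 100)
      pts = 0.25 * np.column_stack([np.cos(th), np.sin(th)])
      pts += rng.normal(0, 1e-3, pts.shape)
      print(curvature_from_points(pts))        # ~ 4.0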

  12. Numerical methods and computers used in elastohydrodynamic lubrication

    Science.gov (United States)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  13. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  14. Work in process level definition: a method based on computer simulation and electre tri

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-09-01

    This paper proposes a method for defining the levels of work in process (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of Computer Simulation and Electre TRI to support estimation of the adequate level of WIP and is presented in eighteen steps. The paper also presents an application example, performed at a metalworking company. The research method is based on Computer Simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.

  15. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.

  16. Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field

    Science.gov (United States)

    Kinnunen, Paivi; Simon, Beth

    2012-01-01

    This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the type of results you may get at the end, using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…

  17. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploratory gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the intended use of a full-profile driving machine to excavate the motorway tunnel. In the part of the exploratory gallery driven by the TBM method, detailed information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. The monitoring system is based on the industrial computer PC 104 and records four basic values of the driving process: the electromotor performance of the Voest-Alpine ATB 35HA driving machine, the speed of the driving advance, the rotation speed of the TBM disintegrating head, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc., are mathematically calculated; these values characterize the rock mass properties and their changes. To define the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the methodics of computing the gathered monitoring information, prepared for the Voest-Alpine ATB 35HA driving machine at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method created in an EXCEL program and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.

  18. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and we then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
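
    The paper's improved estimator is not reproduced here, but the classical Monte Carlo pick-and-freeze scheme it builds on can be sketched compactly. The test function (Ishigami) and sample size are standard illustrative choices, not taken from the article.

      import numpy as np

      def sobol_first_order(f, d, n=2 ** 16, rng=None):
          """Monte Carlo estimate of Sobol first-order indices with the
          Saltelli 'pick-and-freeze' scheme (cost n * (d + 2) model runs)."""
          if rng is None:
              rng = np.random.default_rng(0)
          A = rng.uniform(-np.pi, np.pi, (n, d))
          B = rng.uniform(-np.pi, np.pi, (n, d))
          fA, fB = f(A), f(B)
          var = np.var(np.concatenate([fA, fB]))
          S = np.empty(d)
          for i in range(d):
              ABi = A.copy(); ABi[:, i] = B[:, i]   # freeze factor i from B
              S[i] = np.mean(fB * (f(ABi) - fA)) / var
          return S

      def ishigami(x, a=7.0, b=0.1):
          return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                  + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

      print(sobol_first_order(ishigami, d=3))   # analytic: ~[0.314, 0.442, 0.0]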

  20. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

    Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates the development of accurate computational methods that can help focus follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from high false-positive prediction rates or do not predict the effect that drugs exert on targets in DTIs. In this report, we first present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DTI predictions made by DDR. This validation demonstrates the practical utility of DDR, suggesting that it can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the type of effect a drug exerts on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves very good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify the correct effects of a drug on its target.

  1. A method for the computation of turbulent polymeric liquids including hydrodynamic interactions and chain entanglements

    Energy Technology Data Exchange (ETDEWEB)

    Kivotides, Demosthenes, E-mail: demosthenes.kivotides@strath.ac.uk

    2017-02-12

    An asymptotically exact method for the direct computation of turbulent polymeric liquids is formulated that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols. Highlights: An asymptotically exact method for the computation of polymer and colloidal fluids is developed. The method is valid for all flow inertia and all polymer volume fractions. The method models entanglements and hydrodynamic interactions between polymer chains.

  2. Computation of solution equilibria: A guide to methods in potentiometry, extraction, and spectrophotometry

    International Nuclear Information System (INIS)

    Meloun, M.; Havel, J.; Hogfeldt, E.

    1988-01-01

    Although this book contains a very good review of computation methods applicable to equilibrium systems, most of the book is dedicated to the description and evaluation of computer programs available for doing such calculations. As stated in the preface, the authors (two computniks and a user of graphical and computer methods) have joined forces in order to present the reader with the points of view of both the creator and the user of modern computer program tools available for the study of solution equilibria. The successful presentation of such a complicated amalgamation of concepts is greatly aided by the structure of the book, which begins with a brief but thorough discussion of equilibrium concepts in general, followed by an equally brief discussion of the experimental methods used to study equilibria with potentiometric, extraction, and spectroscopic techniques. These sections would not be sufficient to teach these topics to the beginner, but they offer an informative presentation of the concepts in relation to one another for those already familiar with basic equilibrium concepts. The importance of evaluating and analyzing the suitability of data for further analysis is then presented, before an in-depth (by a chemist's standards) look at the individual parts that make up a detailed equilibrium analysis program. The next one-third of the book is an examination of specific equilibrium problems and the programs available to study them. These are divided into chapters devoted to potentiometric, extraction, and spectroscopic methods. The format is to discuss a variety of programs, one at a time, including the parts of the program, the types of problems to which it has been applied, and the program's limitations. A number of problems are then presented which are representative of the types of questions normally addressed by research projects in the area.

  3. A MODEL FOR DESIGNING STUDENTS’ INDIVIDUAL LEARNING PATHS IN DIGITAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Ghenadie CABAC

    2017-03-01

    Full Text Available Although an individual approach to instruction makes it possible to adapt the training process to students’ individual characteristics, it remains a teacher-centred approach, and the path along which students work through an academic discipline (the learning path) is one and the same for everyone. To engage students fully in the learning activity, modern didactics proposes individualizing instruction by having the student construct his or her own training programme. The paper describes a model for designing individual learning paths as a joint activity of the teacher and the student.

  4. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    Science.gov (United States)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and to exploit the resources of future massively parallel supercomputers are also discussed.
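
    As an illustration of the bookkeeping behind a two-dimensional domain decomposition (a minimal sketch; the IAP model's actual partitioning and halo exchange are not described in this record), each process can be assigned a rectangular latitude-longitude patch as follows:

        def patch_bounds(n, parts, idx):
            """Split n grid points into `parts` nearly equal contiguous chunks."""
            base, extra = divmod(n, parts)
            lo = idx * base + min(idx, extra)
            hi = lo + base + (1 if idx < extra else 0)
            return lo, hi

        def decompose_2d(nlat, nlon, prows, pcols):
            """(lat, lon) index ranges owned by each of prows*pcols ranks."""
            patches = {}
            for r in range(prows):
                for c in range(pcols):
                    rank = r * pcols + c
                    patches[rank] = (patch_bounds(nlat, prows, r),
                                     patch_bounds(nlon, pcols, c))
            return patches

        # Example: a 4x8 process grid over a 180x360 lat-lon mesh.
        for rank, (lat, lon) in decompose_2d(180, 360, 4, 8).items():
            if rank < 3:
                print(rank, "lat", lat, "lon", lon)

    In the real model each rank would also exchange one-cell halo rows and columns with its neighbours every time step, which is where the communication cost that limits the speed-up ratio comes from.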

  5. Computational Methods for Nanoscale X-ray Computed Tomography Image Analysis of Fuel Cell and Battery Materials

    Science.gov (United States)

    Kumar, Arjun S.

    Over the last fifteen years, there has been rapid growth in the use of high resolution X-ray computed tomography (HRXCT) imaging in material science applications. We use it at nanoscale resolutions down to 50 nm (nano-CT) for key research problems in the large-scale operation of polymer electrolyte membrane fuel cells (PEMFC) and lithium-ion (Li-ion) batteries in automotive applications. PEMFC are clean energy sources that electrochemically react with hydrogen gas to produce water and electricity. To reduce their costs, capturing their electrode nanostructure has become significant in modeling and optimizing their performance. For Li-ion batteries, a key challenge in increasing their scope for the automotive industry is Li metal dendrite growth. Li dendrites are structures of lithium with 100 nm features of interest that can grow chaotically within a battery and eventually lead to a short circuit. HRXCT imaging is an effective diagnostic tool for such applications, as it is a non-destructive method of capturing the 3D internal X-ray absorption coefficient of materials from a large series of 2D X-ray projections. Despite a recent push to use HRXCT for quantitative information on material samples, there is a relative dearth of computational tools for nano-CT image processing and analysis. Hence, we focus on developing computational methods for nano-CT image analysis of fuel cell and battery materials as required by the limitations of the material samples and the imaging environment. The first problem we address is the segmentation of nano-CT Zernike phase contrast images. Nano-CT instruments are equipped with Zernike phase contrast optics to distinguish materials with a low difference in X-ray absorption coefficient by phase shifting the X-ray wave that is not diffracted by the sample. However, this creates image artifacts that hinder the use of traditional image segmentation techniques. To restore such images, we set up an inverse problem by modeling the X-ray phase contrast

  6. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model

    Directory of Open Access Journals (Sweden)

    Jeannette H. Spühler

    2018-04-01

    Full Text Available Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods, in which any mathematical description can be translated directly to code. This allows us to develop a cardiac model where specific properties of the heart, such as the fluid-structure interaction of the aortic valve, can be added in a modular way without extensive effort. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking), and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn, which shows near optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework.
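
    In schematic form, a unified continuum formulation of this kind couples the conservation laws for mass and momentum to a phase function theta marking solid versus fluid (a generic sketch of such a formulation, not the paper's exact system):

        \begin{aligned}
          &\partial_t \rho + \nabla\cdot(\rho u) = 0, \\
          &\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) - \nabla\cdot\sigma = 0, \\
          &\partial_t \theta + u\cdot\nabla\theta = 0, \qquad
           \sigma = \theta\,\sigma_{\mathrm{solid}} + (1-\theta)\,\sigma_{\mathrm{fluid}}.
        \end{aligned}

    A single velocity field u then enforces the kinematic coupling at the fluid-structure interface, and the ALE map moves the mesh with the deforming valve.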

  8. Application of Computer-Assisted Learning Methods in the Teaching of Chemical Spectroscopy.

    Science.gov (United States)

    Ayscough, P. B.; And Others

    1979-01-01

    Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)

  9. COMPUTER TOOLS OF DYNAMIC MATHEMATIC SOFTWARE AND METHODICAL PROBLEMS OF THEIR USE

    Directory of Open Access Journals (Sweden)

    Olena V. Semenikhina

    2014-08-01

    Full Text Available The article presents an analysis of the standard tools of dynamic mathematics software that are used in solving tasks, and of the tools on which the teacher can rely in the teaching of mathematics. The possibility of organizing experimental investigation of mathematical objects on the basis of these tools, of formulating new tasks on the basis of a limited number of tools, and of fast automated checking is discussed. Some methodological comments on the application of computer tools and methodological features of the use of interactive mathematical environments are presented. Problems arising from the use of computer tools are identified, among them the teacher's need to rethink forms and methods of training, the search for creative problems, the rational choice of environment, the checking of electronic solutions, and common mistakes in the use of computer tools.

  10. Statistical noise with the weighted backprojection method for single photon emission computed tomography

    International Nuclear Information System (INIS)

    Murayama, Hideo; Tanaka, Eiichi; Toyama, Hinako.

    1985-01-01

    The weighted backprojection (WBP) method and the radial post-correction (RPC) method were compared with several other attenuation correction methods for single photon emission computed tomography by computer simulation. These methods are the pre-correction method with arithmetic means of opposing projections, the post-correction method with a correction matrix, and the inverse attenuated Radon transform method. Statistical mean square noise in a reconstructed image was formulated, and was displayed two-dimensionally for typical simulated phantoms. The noise image for the WBP method depended on several parameters, namely the size of the attenuating object, the distribution of activity, the attenuation coefficient, the choice of the reconstruction index k, and the position of the reconstruction origin. The noise image for the WBP method with k=0 was almost the same as for the RPC method. It has been shown that the position of the reconstruction origin has to be chosen appropriately in order to improve the noise properties of the reconstructed image for the WBP method as well as the RPC method. Comparison of the different attenuation correction methods, carried out using both the reconstructed images and the statistical noise images with the same mathematical phantom and convolving function, concluded that the WBP method and the RPC method were more amenable to arbitrary radioisotope distributions than the other methods, and had the advantage of flexibility to improve the image noise at any local position. (author)
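
    For context, all of the correction schemes compared here approximate the inverse of the attenuated Radon transform, which in schematic form reads

        p(s,\theta) = \int_{L(s,\theta)} f(x)\,
          \exp\!\Big(-\int_{x}^{\mathrm{det}} \mu(y)\, \mathrm{d}y\Big)\, \mathrm{d}l(x),

    where f is the activity distribution, mu the attenuation map, and the inner integral runs from the emission point x along the ray L(s, theta) to the detector.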

  11. Computation of short-time diffusion using the particle simulation method

    International Nuclear Information System (INIS)

    Janicke, L.

    1983-01-01

    The method of particle simulation allows a correct description of turbulent diffusion even in areas near the source, and the computation of overall average values (expected values). The model is suitable for dealing with complex situations. It is derived from the K-model, which describes the dispersion of noxious matter using the diffusion equation. (DG) [de]
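
    A minimal sketch of the idea (illustrative only, not the paper's scheme): in a particle simulation consistent with a K-model, each particle is advected with the mean wind and takes Gaussian steps of variance 2*K*dt, and expected values such as concentration fields are obtained by averaging over particles.

        import numpy as np

        def simulate_plume(n_particles=100_000, n_steps=200, K=5.0, dt=0.1,
                           u=2.0, seed=0):
            """Random-walk particle model for diffusion from a point source.

            Each particle is advected with mean wind speed u along x and
            takes independent Gaussian steps of variance 2*K*dt in each
            direction, which reproduces the K-model diffusion equation in
            the ensemble average.
            """
            rng = np.random.default_rng(seed)
            pos = np.zeros((n_particles, 2))          # all released at origin
            sigma = np.sqrt(2.0 * K * dt)
            for _ in range(n_steps):
                pos[:, 0] += u * dt                   # mean advection
                pos += rng.normal(0.0, sigma, pos.shape)  # turbulent diffusion
            return pos

        pos = simulate_plume()
        # Expected (ensemble-average) concentration on a coarse grid:
        hist, xe, ye = np.histogram2d(pos[:, 0], pos[:, 1], bins=50)
        print("peak cell count:", hist.max())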

  12. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs, as well as their synchronization, at these extreme scales takes up a significant portion of the total simulation time and results in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs, in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is preserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We then propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extending this method to solving complex multiscale problems on Exascale machines.
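
    A toy illustration of relaxed synchronization (a hedged sketch under our own assumptions, not the author's asynchrony-tolerant schemes): a 1D explicit heat-equation solver whose subdomains occasionally reuse stale halo values, as if a neighbour's message had not yet arrived.

        import numpy as np

        def async_heat_step(u_chunks, halos, alpha, p_stale, rng):
            """One explicit step of u_t = u_xx on chunks with possibly stale halos.

            halos[i] holds the (left, right) values last *received* by chunk i;
            with probability p_stale a chunk keeps its old halo instead of the
            freshest neighbour value, mimicking a delayed message.
            """
            new_chunks = []
            for i, u in enumerate(u_chunks):
                left = u_chunks[i - 1][-1] if i > 0 else 0.0
                right = u_chunks[i + 1][0] if i < len(u_chunks) - 1 else 0.0
                if rng.random() < p_stale:      # message delayed: reuse old halo
                    left, right = halos[i]
                halos[i] = (left, right)
                padded = np.concatenate([[left], u, [right]])
                new_chunks.append(u + alpha * (padded[2:] - 2 * u + padded[:-2]))
            return new_chunks

        rng = np.random.default_rng(1)
        x = np.linspace(0, 1, 128, endpoint=False)
        chunks = np.split(np.sin(2 * np.pi * x), 4)   # 4 "processing elements"
        halos = [(0.0, 0.0)] * 4
        for _ in range(2000):                         # alpha = dt/dx^2 <= 0.5
            chunks = async_heat_step(chunks, halos, alpha=0.4, p_stale=0.3, rng=rng)
        print("max |u| after decay:", float(np.max(np.abs(np.concatenate(chunks)))))

    The update remains stable, but the random staleness injects an error whose statistics depend on the message-delay process, consistent with the picture described above.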

  13. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique …-perturbation method. The proposed method is validated using experimental results on two different permanent magnet machines.

  14. CONTRADICTION AND ITS CORRELATION WITH OTHER PRINCIPLES OF THE CRIMINAL PROCEEDING

    Directory of Open Access Journals (Sweden)

    Lucia RUSU

    2016-03-01

    Full Text Available In connection with the reform of the judiciary and the changes in the socio-political life of our state, the adversarial principle has gained a new resonance, on the grounds that judicial and legal reform is directly linked to adversariality. The reform of the criminal procedure law must rest on a solid theoretical foundation. Adversariality, however, as a legal concept, is insufficiently researched in the doctrine of criminal procedure law. Today, renowned specialists in the field of criminal procedure law analyse and study the importance of the fundamentals and basic principles of the criminal process and, first of all, of its adversarial character. The criminal procedure law of the Republic of Moldova is evolving and developing towards the democratization and broadening of the adversarial elements in the administration of justice. This is only natural, since adversariality is of enormous importance for the entire system of the criminal process, largely determining the legal status of, and the relations between, the participants in criminal proceedings, as well as the legal relations established between those participants and the court.

  15. Evaluation of non-volatile metabolites in beer stored at high temperature and utility as an accelerated method to predict flavour stability.

    Science.gov (United States)

    Heuberger, Adam L; Broeckling, Corey D; Sedin, Dana; Holbrook, Christian; Barr, Lindsay; Kirkpatrick, Kaylyn; Prenni, Jessica E

    2016-06-01

    Flavour stability is vital to the brewing industry, as beer is often stored for an extended time under variable conditions. Developing an accelerated model to evaluate brewing techniques that affect flavour stability is an important area of research. Here, we performed metabolomics on non-volatile compounds in beer stored at 37 °C for between 1 and 14 days for two beer types: an amber ale and an India pale ale. The experiment showed that high-temperature storage influences non-volatile metabolites, including the purine 5-methylthioadenosine (5-MTA). In a second experiment, three brewing techniques were evaluated for improved flavour stability: use of antioxidant crowns, chelation of pro-oxidants, and varying the plant content in hops. Sensory analysis determined that the hop method was associated with improved flavour stability, and this was consistent with reduced 5-MTA under both regular and high-temperature storage. Future studies are warranted to understand the influence of 5-MTA on flavour and aging within different beer types.

  16. Short-time Fourier transform used in the analysis of vibration signals to determine flat spots on the wheels of a train

    Directory of Open Access Journals (Sweden)

    Elkin Flórez

    2009-01-01

    Full Text Available One of the common problems encountered in train wheels is the presence of flat spots. These generate an impact strong enough to affect the normal operation of the train. Early detection of wheel flats makes it possible to carry out the necessary corrections (re-turning the wheel surface) so as to avoid damage to train components that would degrade the service provided to users. Although many vibration sensors are available on the market to detect the vibrations generated by a passing train, there is still no standard tool for detecting the presence of flats on its wheels. This study presents the appropriate selection of a time window for using the Short-Time Fourier Transform (STFT) in the analysis of vibration signals, measured at the rail as a train passes, so as to determine the presence of such flats. To this end, a signal simulating the presence of a flat was first generated with Matlab in order to establish how the STFT, implemented with different time windows (rectangular, Gauss, Hanning and Chebyshev), reveals its presence in the joint time-frequency domain. The STFT was then applied to signals recorded in the field. The results show that the STFT is an effective tool for detecting flats on train wheels, provided that the window function and its parameters are selected correctly when performing the time-frequency analysis.
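
    A minimal sketch of this kind of analysis (the signal, sampling rate and window lengths are made-up values, not those of the study): a simulated periodic impact riding on noise, examined with SciPy's STFT under the same four window families the study compares.

        import numpy as np
        from scipy.signal import stft

        fs = 10_000                                  # sampling rate, Hz (assumed)
        t = np.arange(0, 2.0, 1 / fs)
        rng = np.random.default_rng(0)

        # Simulated signal: background noise plus one short broadband
        # impact per wheel revolution (every 0.25 s), as a flat would produce.
        signal = 0.1 * rng.normal(size=t.size)
        for t0 in np.arange(0.1, 2.0, 0.25):
            idx = (t >= t0) & (t < t0 + 0.005)
            signal[idx] += np.exp(-800 * (t[idx] - t0)) \
                * np.sin(2 * np.pi * 3000 * (t[idx] - t0))

        # The window choice controls how sharply the impacts stand out
        # in the joint time-frequency plane.
        for win in ["boxcar", "hann", ("gaussian", 32), ("chebwin", 80)]:
            f, tau, Z = stft(signal, fs=fs, window=win, nperseg=256)
            print(win, "-> peak |STFT| =", round(float(np.abs(Z).max()), 4))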

  17. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in the algorithms in a covariant way. The kernel C of the restriction operator, which averages from one grid to the next coarser grid, is defined by projection on the ground state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in the case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.
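
    To make the grid hierarchy concrete, here is a generic two-grid cycle for the free (gauge-field-less) 1D propagator equation (-Δ + m²)u = b with zero Dirichlet boundaries; everything here is illustrative. The paper's point is that in a gauge field the simple full-weighting restriction below should be replaced by a covariant ground-state-projection kernel C.

        import numpy as np

        def apply_A(u, m2, h):
            """Matrix-free (-d^2/dx^2 + m^2) on interior points, zero BCs."""
            up = np.pad(u, 1)                       # ghost zeros at both ends
            return (2 * u - up[2:] - up[:-2]) / h**2 + m2 * u

        def jacobi(u, b, m2, h, sweeps, omega=2/3):
            diag = 2 / h**2 + m2
            for _ in range(sweeps):
                u = u + omega * (b - apply_A(u, m2, h)) / diag
            return u

        def two_grid(u, b, m2, h):
            """Smooth, restrict the residual, solve coarsely, correct, smooth."""
            u = jacobi(u, b, m2, h, 3)
            r = b - apply_A(u, m2, h)
            m = (u.size - 1) // 2                   # coarse interior points
            rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])  # full weighting
            hc = 2 * h
            Ac = ((2 / hc**2 + m2) * np.eye(m)
                  - np.eye(m, k=1) / hc**2 - np.eye(m, k=-1) / hc**2)
            ec = np.linalg.solve(Ac, rc)            # exact coarse solve
            e = np.zeros_like(u)
            e[1::2] = ec                            # coarse points at odd indices
            ecp = np.pad(ec, 1)
            e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])    # linear interpolation
            return jacobi(u + e, b, m2, h, 3)

        n, h, m2 = 255, 1 / 256, 0.1
        x = np.linspace(h, 1 - h, n)
        b, u = np.sin(3 * np.pi * x), np.zeros(n)
        for it in range(10):
            u = two_grid(u, b, m2, h)
            print(it, float(np.linalg.norm(b - apply_A(u, m2, h))))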

  18. Computational Methods for ChIP-seq Data Analysis and Applications

    KAUST Repository

    Ashoor, Haitham

    2017-04-25

    The development of chromatin immunoprecipitation followed by sequencing (ChIP-seq) technology has enabled the construction of genome-wide maps of protein-DNA interactions. Such maps provide information about transcriptional regulation at the epigenetic level (histone modifications and histone variants) and at the level of transcription factor (TF) activity. This dissertation presents novel computational methods for ChIP-seq data analysis and applications. The work addresses four main challenges. First, I address the problem of detecting histone modifications from ChIP-seq cancer samples. The presence of copy number variations (CNVs) in cancer samples results in statistical biases that lead to inaccurate predictions when standard methods are used. To overcome this issue I developed HMCan, an algorithm specially designed to handle ChIP-seq cancer data by accounting for the presence of CNVs. When using ChIP-seq data from cancer cells, HMCan demonstrates unbiased and accurate predictions compared to the standard state-of-the-art methods. Second, I address the problem of identifying changes in histone modifications between two ChIP-seq samples with different genetic backgrounds (for example, cancer vs. normal). In addition to CNVs, differing antibody efficiency between samples and the presence of sample replicates are challenges for this problem. To overcome these issues, I developed the HMCan-diff algorithm as an extension of HMCan. HMCan-diff implements robust normalization methods to address the challenges listed above. HMCan-diff significantly outperforms other state-of-the-art methods on data containing cancer samples. Third, I investigate and analyze the predictions of different methods for enhancer prediction based on ChIP-seq data. The analysis shows that predictions generated by different methods overlap poorly. To overcome this issue, I developed DENdb, a database that integrates enhancer predictions from different methods. DENdb also

  19. A comparison of efficient methods for the computation of Born gluon amplitudes

    International Nuclear Information System (INIS)

    Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan

    2006-01-01

    We compare four different methods for the numerical computation of pure gluonic amplitudes in the Born approximation. We are particularly interested in the efficiency of the various methods as the number n of external particles increases. In addition, we investigate the numerical accuracy in critical phase space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices and (iv) BCF recursion relations.
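
    For orientation, the Berends-Giele recursion builds the amplitude from off-shell currents (standard textbook form, quoted for context rather than from the paper); reusing lower-point currents is what keeps its cost polynomial in n:

        J^{\mu}(1,\dots,n) = \frac{-i}{P_{1,n}^{2}}
          \Bigg[ \sum_{k=1}^{n-1} V_{3}^{\mu\nu\rho}(P_{1,k}, P_{k+1,n})\,
                   J_{\nu}(1,\dots,k)\, J_{\rho}(k+1,\dots,n)
               + \sum_{1\le j<k\le n-1} V_{4}^{\mu\nu\rho\sigma}\,
                   J_{\nu}(1,\dots,j)\, J_{\rho}(j+1,\dots,k)\,
                   J_{\sigma}(k+1,\dots,n) \Bigg],
          \qquad P_{i,j} = p_{i} + \dots + p_{j}.

    The n-gluon Born amplitude follows by amputating the propagator of the remaining leg, putting it on shell, and contracting with its polarization vector.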

  20. Analysis of Protein by Spectrophotometric and Computer Colour Based Intensity Method from Stem of Pea (Pisum sativum) at Different Stages

    Directory of Open Access Journals (Sweden)

    Afsheen Mushtaque Shah

    2010-12-01

    Full Text Available In this study, proteins from pea plants were analyzed at three different growth stages of the stem by spectrophotometric quantitative methods (Lowry and Bradford) and by a computer colour-intensity-based method. Although the spectrophotometric methods are regarded as classical methods, we report an alternative computer-based method that gave comparable results. Computer software was developed for the protein analysis; it is an easier, time- and money-saving method compared to the classical methods.
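
    A minimal sketch of the kind of colour-intensity measurement such software might perform (entirely illustrative; the study's actual software and calibration are not described in this record): average the grey intensity of a stained-band region and read the concentration off a linear standard curve.

        import numpy as np

        def mean_band_intensity(image, box):
            """Mean grey intensity inside a (row0, row1, col0, col1) region."""
            r0, r1, c0, c1 = box
            return image[r0:r1, c0:c1].astype(float).mean()

        def concentration_from_intensity(intensity, standards):
            """Linear standard curve fitted to (intensity, concentration) pairs."""
            xs, ys = zip(*standards)
            slope, intercept = np.polyfit(xs, ys, 1)
            return slope * intensity + intercept

        # Synthetic example: darker band (lower intensity) = more protein.
        rng = np.random.default_rng(0)
        img = rng.integers(180, 220, size=(100, 100)).astype(float)
        img[40:60, 30:70] -= 90                      # simulated stained band
        i_band = mean_band_intensity(img, (40, 60, 30, 70))
        standards = [(200.0, 0.0), (150.0, 1.0), (100.0, 2.0)]  # hypothetical
        print(f"band intensity {i_band:.1f} -> "
              f"{concentration_from_intensity(i_band, standards):.2f} mg/mL")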