WorldWideScience

Sample records for accurate computational approach

  1. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system, which assumes pore-level perfect mixing. This assumption is invalid in moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
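
    The streamline splitting method itself is not described here in enough algorithmic detail to reproduce. As a point of reference, the sketch below (Python, with an invented toy geometry and conductances) shows the kind of pore-network flow solve that both MCM and SSM build on: pore-body pressures are obtained from a linear system assembled from throat conductances, after which throat flow rates follow directly.

```python
# Minimal pore-network pressure solve (the common substrate of MCM/SSM-style
# models); the toy network and conductances below are illustrative only.
import numpy as np

# Pore network: 4 pore bodies in a line, throats between neighbours.
# throats[k] = (pore_a, pore_b, conductance g), with q_ab = g * (p_a - p_b).
throats = [(0, 1, 2.0e-12), (1, 2, 1.0e-12), (2, 3, 2.0e-12)]
n_pores = 4
fixed = {0: 2.0e5, 3: 1.0e5}          # boundary pressures [Pa] at pores 0 and 3

# Mass balance sum_j g_ij (p_i - p_j) = 0 at every pore; then overwrite the
# rows of the boundary pores with the Dirichlet conditions.
A = np.zeros((n_pores, n_pores))
b = np.zeros(n_pores)
for i, j, g in throats:
    for a, c in ((i, j), (j, i)):
        A[a, a] += g
        A[a, c] -= g
for i, p_fixed in fixed.items():
    A[i, :] = 0.0
    A[i, i] = 1.0
    b[i] = p_fixed

p = np.linalg.solve(A, b)                                   # pore-body pressures
q = [(i, j, g * (p[i] - p[j])) for i, j, g in throats]      # throat flow rates
print("pressures [Pa]:", p)
print("throat flows  :", q)
```

    For a real network with many thousands of pores, a sparse solver would replace the dense one; the structure of the system is the same.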

  2. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria for assessing Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail those of other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. INDUS - a composition-based approach for rapid and accurate taxonomic classification of metagenomic sequences

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Reddy, Rachamalla Maheedhar; Reddy, Chennareddy Venkata Siva Kumar; Singh, Nitin Kumar; Mande, Sharmila S.

    2011-01-01

    Background: Taxonomic classification of metagenomic sequences is the first step in metagenomic analysis. Existing taxonomic classification approaches are of two types, similarity-based and composition-based. Similarity-based approaches, though accurate and specific, are extremely slow. Since metagenomic projects generate millions of sequences, adopting similarity-based approaches becomes virtually infeasible for research groups with modest computational resources. In this study, we present ...
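
    The INDUS algorithm itself is not described in this excerpt. As a generic illustration of the composition-based idea it belongs to, the sketch below classifies a read by comparing its tetranucleotide-frequency vector against per-taxon centroids; the reference sequences, taxa, and distance measure are all invented for illustration.

```python
# Generic composition-based classification sketch (not the INDUS algorithm):
# represent each sequence by its tetranucleotide frequency vector and assign
# a read to the reference taxon with the closest centroid. Toy data only.
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def composition(seq, k=4):
    """Normalized k-mer frequency vector of a DNA sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in INDEX:                 # skip k-mers containing N, etc.
            v[INDEX[kmer]] += 1
    total = v.sum()
    return v / total if total else v

# Hypothetical reference genomes (toy strings) grouped by taxon.
references = {
    "TaxonA": ["ACGTACGTACGTGGCCAACGT" * 5],
    "TaxonB": ["TTTTGGGGCCCCAAAATTGGC" * 5],
}
centroids = {t: np.mean([composition(s) for s in seqs], axis=0)
             for t, seqs in references.items()}

def classify(read):
    """Assign a read to the taxon whose centroid is nearest in Euclidean distance."""
    comp = composition(read)
    return min(centroids, key=lambda t: np.linalg.norm(comp - centroids[t]))

print(classify("ACGTACGTACGTGGCCAACGTACGT"))   # expected: TaxonA
```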

  4. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  5. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting-based approach, in which every pixel votes for potential eye center candidates, is adopted. The votes are distributed over a subset of pixels lying opposite to the gradient direction, and the vote weights are assigned according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and yielded a better iris detection rate than the state of the art. The proposed approach is robust against illumination variation, small pose variations, the presence of eyeglasses, and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
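
    The paper's exact vote-weighting mechanism and candidate classifier are not reproduced here. The sketch below (OpenCV/NumPy, with illustrative parameters such as the Canny thresholds, the maximum voting radius, and the 1/r weighting) only shows the general idea of gradient-direction voting: each edge pixel casts votes along the direction opposite to its gradient, and the accumulator maximum is taken as an eye-center candidate.

```python
# Simplified gradient-direction voting for an eye-centre candidate (a generic
# sketch of the idea, not the paper's exact weighting/classification pipeline).
import cv2
import numpy as np

def eye_center_candidate(gray, max_radius=40):
    gray = cv2.equalizeHist(gray)                      # crude illumination normalization
    edges = cv2.Canny(gray, 50, 150)                   # edge map
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # image gradients
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9

    acc = np.zeros(gray.shape, dtype=np.float64)       # vote accumulator
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Unit vector opposite to the gradient (dark iris surrounded by brighter sclera).
        dx, dy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]
        for r in range(1, max_radius):
            vx, vy = int(round(x + r * dx)), int(round(y + r * dy))
            if 0 <= vx < gray.shape[1] and 0 <= vy < gray.shape[0]:
                acc[vy, vx] += 1.0 / r                 # nearer votes weighted higher (illustrative)
            else:
                break
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy

# Usage (hypothetical file name):
# img = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)
# print(eye_center_candidate(img))
```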

  6. An Integrative Approach to Accurate Vehicle Logo Detection

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and a small RoI segmentation, rapidly focalize a small logo target. An enhanced AdaBoost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle AdaBoost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  7. Computational approaches to vision

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  8. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These two studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  9. Multidetector row computed tomography may accurately estimate plaque vulnerability: does MDCT accurately estimate plaque vulnerability? (Pro).

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These two studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal ECG and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. PMID:21532180

  10. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
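
    The abstract does not specify the underlying traffic model, so the toy day-to-day route-choice loop below (all parameters invented) only illustrates the bounded-rationality rule it describes: when the reported travel-time difference between the two routes is below the threshold BR, travelers choose either route with equal probability; otherwise most of them chase the route reported as better, which with delayed feedback produces the oscillations discussed above.

```python
# Toy day-to-day two-route choice with delayed feedback and a bounded-rationality
# threshold BR (illustrative parameters; not the paper's traffic model).
import numpy as np

rng = np.random.default_rng(0)
N, DAYS, DELAY, BR = 1000, 60, 3, 5.0   # travelers, days, feedback delay, threshold

def travel_time(flow):
    """Simple congestion function: free-flow time plus a flow-dependent term."""
    return 15.0 + 0.05 * flow

history = []                              # per-day travel times of (route 0, route 1)
split = np.full(DAYS, np.nan)             # fraction choosing route 0
for day in range(DAYS):
    if day < DELAY:
        p0 = 0.5                          # no usable feedback yet
    else:
        t0, t1 = history[day - DELAY]     # delayed information shown to travelers
        if abs(t0 - t1) < BR:
            p0 = 0.5                      # indifferent within the BR band
        else:
            p0 = 0.9 if t0 < t1 else 0.1  # most travelers chase the "better" route
    n0 = rng.binomial(N, p0)
    history.append((travel_time(n0), travel_time(N - n0)))
    split[day] = n0 / N

print("mean |split - 0.5| over last 20 days:", np.abs(split[-20:] - 0.5).mean())
```

    Increasing BR damps the flip-flopping between routes and keeps the split closer to the balanced value, which is the qualitative effect the abstract reports.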

  11. Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically intensive...

  12. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab

  13. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
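
    The accurate system model and the parallel MIMD implementation are beyond a short sketch, but the core iteration named in the abstract, conjugate gradients applied to the projection equations, can be illustrated with a serial CGLS loop on a stand-in system matrix (the matrix and sizes below are toy values, not a SPET acquisition model).

```python
# Serial conjugate-gradient least-squares (CGLS) iteration for y ~ A x, the kind
# of solver the abstract applies to the SPET projection equations (toy A only).
import numpy as np

def cgls(A, y, iterations=10):
    x = np.zeros(A.shape[1])
    r = y - A @ x                 # residual in projection space
    s = A.T @ r                   # gradient in image space
    p = s.copy()
    gamma = s @ s
    for _ in range(iterations):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.random((90, 64))          # stand-in system matrix (projections x image pixels)
x_true = rng.random(64)
y = A @ x_true
print("relative error after 10 iterations:",
      np.linalg.norm(cgls(A, y) - x_true) / np.linalg.norm(x_true))
```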

  14. Accurate FOFEM computations and ray tracing in particle optics

    Lencová, Bohumila

    Oxford: Institute of Physics, 2004, pp. 169-172. ISBN 0-750-30967-9. [EMAG 2003 Electron Microscopy and Analysis Group Conference. Oxford (GB), 03.09.2003-05.09.2003] R&D Projects: GA ČR GA202/03/1575. Keywords: finite element method * magnetic electron lenses * accuracy of computation. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  15. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  16. On accurate computations of bound state properties in three- and four-electron atomic systems

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li$^{-}$ in its ground $2^{2}S-$state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  17. A programming approach to computability

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  18. Elliptic curves a computational approach

    Schmitt, Susanne; Pethö, Attila

    2003-01-01

    The basics of the theory of elliptic curves should be known to everybody, be he (or she) a mathematician or a computer scientist. Especially everybody concerned with cryptography should know the elements of this theory. The purpose of the present textbook is to give an elementary introduction to elliptic curves. Since this branch of number theory is particularly accessible to computer-assisted calculations, the authors make use of it by approaching the theory from a computational point of view. Specifically, the computer-algebra package SIMATH can be applied on several occasions. However, the book can be read also by those not interested in any computations. Of course, the theory of elliptic curves is very comprehensive and becomes correspondingly sophisticated. That is why the authors made a choice of the topics treated. Topics covered include the determination of torsion groups, computations regarding the Mordell-Weil group, height calculations, S-integral points. The contents are kept as elementary as poss...
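
    Not taken from the book or from SIMATH: as a minimal example of the kind of computation such a text treats, the sketch below implements affine point addition on a curve y^2 = x^3 + ax + b over a prime field; the curve and point are illustrative.

```python
# Affine point addition on y^2 = x^3 + a x + b over F_p -- the basic group
# operation behind the computations discussed above (toy curve; not from the book).
def ec_add(P, Q, a, p):
    if P is None:                 # None represents the point at infinity
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None               # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Example: the curve y^2 = x^3 + 2x + 3 over F_97, doubling a point on it.
a, b, p = 2, 3, 97
P = (3, 6)          # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97), so P lies on the curve
print(ec_add(P, P, a, p))
```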

  19. A semi-empirical approach to accurate standard enthalpies of formation for solid hydrides

    Klaveness, A. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway)], E-mail: arnekla@kjemi.uio.no; Fjellvag, H.; Kjekshus, A.; Ravindran, P. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway); Swang, O. [SINTEF Materials and Chemistry, P.O. Box 124, Blindern, N-0314 Oslo (Norway)

    2009-02-05

    A semi-empirical method for estimation of enthalpies of formation of solid hydrides is proposed. The method is named Ionic for short. By combining experimentally known enthalpies of formation for simple hydrides and reaction energies computed using band-structure density functional theory (DFT) methods, startlingly accurate results can be achieved. The approach relies on cancellation of errors when comparing DFT energies for systems with similar electronic structures. The influence of zero-point energies, polaritons, and vibrational excitations on the results has been examined and found to be minor.
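
    The details of the Ionic scheme are not given in this record; the arithmetic it rests on is a Hess-cycle combination of an experimentally known enthalpy of formation for a reference hydride with a DFT reaction energy linking it to the target compound. The sketch below shows only that combination, with placeholder numbers.

```python
# Hess-cycle arithmetic of the kind the "Ionic" scheme relies on: combine an
# experimental enthalpy of formation of a reference hydride with a DFT reaction
# energy for the step reference -> target. All numbers below are placeholders.
def estimate_dHf(dHf_reference_exp, dE_reaction_dft, stoichiometry=1.0):
    """dHf(target) ~ stoichiometry * dHf(reference, exp.) + dE(reference -> target, DFT)."""
    return stoichiometry * dHf_reference_exp + dE_reaction_dft

dHf_reference_exp = -75.0    # kJ/mol, experimental value for the simple reference hydride
dE_reaction_dft = -12.3      # kJ/mol, DFT energy of the balanced reaction reference -> target
print(estimate_dHf(dHf_reference_exp, dE_reaction_dft), "kJ/mol (placeholder numbers)")
```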

  20. Computational approaches to energy materials

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  1. Development of highly accurate approximate scheme for computing the charge transfer integral

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature

  2. Development of highly accurate approximate scheme for computing the charge transfer integral.

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
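
    As an illustration of the Taylor-expansion idea only (not of the EOM-CC calculations themselves), the sketch below expands a transfer integral J to second order along a displacement coordinate, taking the derivatives by central finite differences of a stand-in analytic J(x).

```python
# Second-order Taylor expansion of a transfer integral J along a displacement
# coordinate, with derivatives from central finite differences of a few
# reference evaluations. J_reference is a stand-in analytic function, not an
# EOM-CC result; step size and coefficients are illustrative.
import numpy as np

def J_reference(x):
    return 45.0 * np.exp(-0.8 * x) * np.cos(0.3 * x)   # meV, toy model of J(x)

h = 0.05                                   # finite-difference step along the coordinate
J0 = J_reference(0.0)
J1 = (J_reference(h) - J_reference(-h)) / (2 * h)             # dJ/dx at x = 0
J2 = (J_reference(h) - 2 * J0 + J_reference(-h)) / h**2       # d2J/dx2 at x = 0

def J_taylor(x):
    return J0 + J1 * x + 0.5 * J2 * x**2

for x in (0.1, 0.3, 0.5):
    print(f"x={x:.1f}  exact={J_reference(x):7.3f}  Taylor={J_taylor(x):7.3f} meV")
```

    In the paper's setting the three reference evaluations would be expensive coupled-cluster calculations, which is exactly why the expansion pays off when a large region of the potential energy surface must be covered.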

  3. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. While many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation when close to the source and a plane-wave approximation when far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We associate with these local operators a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
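
    The authors' hybrid spherical/plane-wave operators and variable grid spacing are not reproduced here. For orientation, the sketch below implements the classical first-order Godunov fast-sweeping solver for |grad T| = s on a uniform 2-D grid, the kind of baseline scheme whose traveltime accuracy the paper improves on.

```python
# Classical first-order fast-sweeping solver for the 2-D eikonal equation
# |grad T| = s (slowness) on a uniform grid -- a baseline, not the hybrid
# spherical/plane-wave operators described in the abstract.
import numpy as np

def fast_sweep_eikonal(slowness, src, h=1.0, n_sweep_cycles=4):
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweep_cycles):
        for ys, xs in orders:                 # four alternating sweep directions
            for i in ys:
                for j in xs:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    sh = slowness[i, j] * h
                    if abs(a - b) >= sh:      # update arrives from one direction only
                        t_new = min(a, b) + sh
                    else:                     # genuinely 2-D Godunov update
                        t_new = 0.5 * (a + b + np.sqrt(2 * sh * sh - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

s = np.ones((101, 101))                       # homogeneous unit-slowness model
T = fast_sweep_eikonal(s, src=(50, 50))
print("computed / exact traveltime at (50, 90):", T[50, 90], "/", 40.0)
```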

  4. Proposed Enhanced Object Recognition Approach for Accurate Bionic Eyes

    Mohammad Shkoukani

    2012-07-01

    AI has played a huge role in image formation and recognition, built on the supervised and unsupervised learning algorithms that learning agents follow. Neural networks also have a role in bionic eye integration, but this is not discussed thoroughly in this paper. The chip to be implanted, a robotic device that applies methods developed in machine learning, runs large-scale feature-learning algorithms to construct classifiers for object detection and recognition as input to the chip system. The challenge, however, is in identifying a complex image, which may require combined feature-learning processes. In this paper, experimented approaches are stated for individual cases of object concentration to obtain a high recognition outcome. Each approach may address one aspect, and a suggested non-experimented approach may give better visual aid for bionic recognition and identification, using more learning and testing methods. The paper discusses different kernel and convolutional methods to classify objects, in addition to a proposed model to extract a maximized optimization of object formation and recognition. The proposed model combines a variety of algorithms that have been experimented with in related works and uses different learning approaches to handle large datasets in training.

  5. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light-duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
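
    The actual workflow couples MATLAB regression with REFPROP and CHEMKIN simulations and cannot be reproduced here. As a schematic stand-in, the sketch below fits surrogate mole fractions to a set of target properties by constrained least squares, assuming, purely for illustration, that properties blend linearly by mole fraction; all species properties, weights, and targets are placeholders.

```python
# Schematic surrogate-formulation step: choose mole fractions of palette species
# to match target fuel properties. The real workflow couples regression with
# REFPROP and CHEMKIN simulations; here, purely for illustration, properties are
# assumed to blend linearly by mole fraction. All numbers are placeholders.
import numpy as np
from scipy.optimize import minimize

species = ["n-heptane", "iso-octane", "toluene"]
# Placeholder per-species property rows: [H/C ratio, density (kg/m3), RON]
props = np.array([[2.29, 684.0, 0.0],
                  [2.25, 692.0, 100.0],
                  [1.14, 867.0, 120.0]])
target = np.array([1.96, 720.0, 84.0])       # hypothetical FACE-like targets
weights = np.array([1.0, 1.0e-2, 1.0e-1])    # balance the differing property scales

def objective(x):
    blended = x @ props                       # linear-by-mole blending (illustrative)
    return np.sum((weights * (blended - target)) ** 2)

cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)   # mole fractions sum to 1
res = minimize(objective, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
print(dict(zip(species, np.round(res.x, 3))), "objective:", round(res.fun, 4))
```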

  6. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
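
    The paper's O(n) algorithm evaluates both the initial guesses and the Legendre polynomials via asymptotic formulae. The sketch below shows the classical approach it accelerates: Newton's method on P_n(x) with cosine initial guesses and the three-term recurrence (parameters such as the tolerance are illustrative).

```python
# Classical computation of Gauss-Legendre nodes/weights: Newton's method on
# P_n(x) with cosine initial guesses and the three-term recurrence.
# (The paper's O(n) algorithm replaces the recurrence with asymptotic formulae.)
import numpy as np

def gauss_legendre(n, tol=1e-15):
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))      # initial guesses for the roots
    for _ in range(100):
        # Evaluate P_n(x) and P_{n-1}(x) by the three-term recurrence.
        p_prev, p = np.ones_like(x), x.copy()
        for m in range(2, n + 1):
            p_prev, p = p, ((2 * m - 1) * x * p - (m - 1) * p_prev) / m
        dp = n * (x * p - p_prev) / (x * x - 1.0)   # derivative P_n'(x)
        dx = p / dp
        x -= dx                                     # Newton step
        if np.max(np.abs(dx)) < tol:
            break
    w = 2.0 / ((1.0 - x * x) * dp * dp)             # quadrature weights
    return x, w

x, w = gauss_legendre(5)
print("sum of weights (should be 2):", w.sum())
print("integral of x^4 on [-1,1]   :", np.sum(w * x**4), "(exact 0.4)")
```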

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  9. GRID COMPUTING AND CHECKPOINT APPROACH

    Pankaj gupta

    2011-05-01

    Grid computing is a means of allocating the computational power of a large number of computers to complex, difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that even span organizational boundaries. In this paper we investigate the different techniques of fault tolerance which are used in many real-time distributed systems. The main focus is on the types of fault occurring in the system, fault detection techniques, and the recovery techniques used. A fault can occur due to link failure, resource failure, or any other reason, and must be tolerated for the system to keep working smoothly and accurately. These faults can be detected and recovered by many techniques used accordingly. An appropriate fault detector can avoid loss due to system crash, and a reliable fault tolerance technique can save the system from failure. This paper describes how these methods are applied to detect and tolerate faults in various real-time distributed systems. The advantages of utilizing the checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low-level checkpointing packages. Therefore, such a standard named Grid Checkpointing Architecture is being designed. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for an automatic recovery is the availability of checkpoint files. A strategy to increase the availability of checkpoints is replication. Grid is a form of distributed computing used mainly to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of
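
    The abstract does not give the exact rule used to set checkpoints from the resource failure rate. A common first-order rule for this purpose is Young's approximation, interval ≈ sqrt(2 × checkpoint cost × MTBF), sketched below with illustrative numbers.

```python
# Young's first-order approximation for the checkpoint interval, a common way to
# set checkpoint frequency from the resource failure rate (the paper's exact rule
# is not specified in the abstract; the numbers below are illustrative).
import math

def young_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Optimal-interval estimate: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

checkpoint_cost = 60.0            # seconds to write one checkpoint file
failure_rate = 1.0 / (24 * 3600)  # one failure per day on this grid resource
print("checkpoint every %.0f s" % young_checkpoint_interval(checkpoint_cost, 1.0 / failure_rate))
```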

  10. Computational Approaches to Vestibular Research

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  11. Computer Networks A Systems Approach

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  12. Efficient and accurate P-value computation for Position Weight Matrices

    Varré Jean-Stéphane

    2007-12-01

    Background: Position Weight Matrices (PWMs) are probabilistic representations of signals in sequences. They are widely used to model approximate patterns in DNA or in protein sequences. The usage of PWMs requires, as a prerequisite, knowing the statistical significance of a word according to its score. This is done by defining the P-value of a score, which is the probability that the background model can achieve a score larger than or equal to the observed value. This gives rise to the following problem: Given a P-value, find the corresponding score threshold. Existing methods rely on dynamic programming or probability generating functions. For many examples of PWMs, they fail to give accurate results in a reasonable amount of time. Results: The contribution of this paper is twofold. First, we study the theoretical complexity of the problem, and we prove that it is NP-hard. Then, we describe a novel algorithm that solves the P-value problem efficiently. The main idea is to use a series of discretized score distributions that improves the final result step by step until some convergence criterion is met. Moreover, the algorithm is capable of calculating the exact P-value without any error, even for matrices with non-integer coefficient values. The same approach is also used to devise an accurate algorithm for the reverse problem: finding the P-value for a given score. Both methods are implemented in software called TFM-PVALUE, which is freely available. Conclusion: We have tested TFM-PVALUE on a large set of PWMs representing transcription factor binding sites. Experimental results show that it achieves better performance in terms of computational time and precision than existing tools.
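
    TFM-PVALUE's round-and-refine strategy is more involved than can be shown here; the basic dynamic-programming computation it builds on, the exact score distribution of a PWM under an i.i.d. background and the P-value read off from it, looks like the sketch below (toy matrix and uniform background).

```python
# Basic dynamic-programming P-value computation for a PWM score under an i.i.d.
# background model (the round-and-refine strategy of TFM-PVALUE builds on this;
# the matrix and background below are toy values).
from collections import defaultdict

background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
# Toy 3-column score matrix (e.g. log-odds scores), one dict per position.
pwm = [{"A": 2.0, "C": -1.0, "G": -1.0, "T": -2.0},
       {"A": -1.5, "C": 2.5, "G": 0.5, "T": -1.5},
       {"A": 0.0, "C": -0.5, "G": 2.0, "T": -1.0}]

def score_distribution(pwm, background, decimals=6):
    """Exact distribution of word scores, built column by column."""
    dist = {0.0: 1.0}
    for column in pwm:
        new = defaultdict(float)
        for score, prob in dist.items():
            for base, s in column.items():
                new[round(score + s, decimals)] += prob * background[base]
        dist = dict(new)
    return dist

def p_value(pwm, background, threshold):
    """P(word score >= threshold) under the background model."""
    return sum(p for s, p in score_distribution(pwm, background).items() if s >= threshold)

print(p_value(pwm, background, threshold=4.0))
```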

  13. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of methods and models that are overkill in terms of complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problem in the nanotechnology industry. (semiconductor devices)

  14. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of methods and models that are overkill in terms of complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problem in the nanotechnology industry.

  15. Can computer simulators accurately represent the pathophysiology of individual COPD patients?

    Wang, Wenfei; Das, Anup; Ali, Tayyba; Cole, Oanna; Chikhani, Marc; Haque, Mainul; Hardman, Jonathan G; Bates, Declan G

    2014-01-01

    Background: Computer simulation models could play a key role in developing novel therapeutic strategies for patients with chronic obstructive pulmonary disease (COPD) if they can be shown to accurately represent the pathophysiological characteristics of individual patients. Methods: We evaluated the capability of a computational simulator to reproduce the heterogeneous effects of COPD on alveolar mechanics as captured in a number of different patient datasets. Results: Our results show that accu...

  16. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly
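
    VORPAL's charge-conserving, cut-cell, and implicit algorithms are far more elaborate than anything that fits here, but the key point that the fields can be advanced with purely local updates, without any global solve, is already visible in a minimal 1-D Yee/FDTD loop (normalized units; grid, source, and step count are illustrative).

```python
# Minimal 1-D Yee/FDTD update: the fields advance with purely local stencils,
# no global solve (VORPAL's charge-conserving, cut-cell and implicit schemes
# are far more elaborate; this only illustrates the locality of the update).
import numpy as np

c0 = 299_792_458.0
nx, dx = 400, 1.0e-3
dt = 0.99 * dx / c0                      # Courant-stable time step
Ez = np.zeros(nx)                        # E at integer grid points
Hy = np.zeros(nx - 1)                    # H staggered between them (normalized units)

for n in range(600):
    Hy += (dt * c0 / dx) * (Ez[1:] - Ez[:-1])            # local curl-E update
    Ez[1:-1] += (dt * c0 / dx) * (Hy[1:] - Hy[:-1])      # local curl-H update
    Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)       # soft Gaussian source

print("peak |Ez| after 600 steps:", np.abs(Ez).max())
```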

  17. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  18. A fast and accurate method for computing the Sunyaev-Zeldovich signal of hot galaxy clusters

    Chluba, Jens; Sazonov, Sergey; Nelson, Kaylea

    2012-01-01

    New generation ground and space-based CMB experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zeldovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe > 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v > 1000 km s^{-1}). It is well-known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. By explicitly imposing Lorentz-invariance of the scattering optical depth, we also show that the kinematic corrections to the SZ intensity signal found in thi...

  3. Fuzzy multiple linear regression: A computational approach

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  4. Computational approach to Riemann surfaces

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existent computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition, examples of solutions to partial differential equations and of problems in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  5. Computer-based personality judgments are more accurate than those made by humans.

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507
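
    As a rough, illustrative sketch (not the study's actual pipeline), the accuracy criterion used above, namely the correlation between judged and self-reported trait scores, can be computed as follows; the data and the function name below are invented stand-ins:

      import numpy as np

      def judgment_accuracy(judged, self_reported):
          """Pearson correlation between judged and self-reported trait scores."""
          return float(np.corrcoef(judged, self_reported)[0, 1])

      # Stand-in data: 'computer' judgments are modelled as less noisy than 'friend' ones.
      rng = np.random.default_rng(0)
      truth = rng.normal(size=500)
      computer = truth + rng.normal(scale=1.3, size=500)
      friend = truth + rng.normal(scale=1.6, size=500)
      print("computer r =", judgment_accuracy(computer, truth))
      print("friend   r =", judgment_accuracy(friend, truth))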

  6. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation, and this shows that the surface area of the walls varies from 143 cm2 to 392 cm2. (Author)
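
    A minimal sketch of the underlying surface-area computation, assuming the CT contours have already been triangulated into a vertex/face mesh (the mesh below is a hypothetical single square, not patient data):

      import numpy as np

      def mesh_surface_area(vertices, faces):
          """Sum of triangle areas: 0.5 * |(B - A) x (C - A)| for each face."""
          a, b, c = (vertices[faces[:, i]] for i in range(3))
          return float(0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum())

      # A unit square split into two triangles; its area should come out as 1.0.
      verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
      tris = np.array([[0, 1, 2], [0, 2, 3]])
      print(mesh_surface_area(verts, tris))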

  7. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the use of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants. PMID:24216719

  8. Performance assessment of a fast and accurate scalar optical diffraction field computation algorithm

    Bora Esmer, G.

    2013-03-01

    Dynamic holographic reconstructions can be obtained by employing digital holographic video displays, which are pixelated devices. In practice, spatial light modulators (SLMs) are used for such purposes. The pixelated structure of SLMs can affect the quality of the reconstructed objects. Hence, to obtain better reconstructions, the pixelated structure of SLMs has to be taken into consideration. Rapid calculation of the diffraction field emitted by the object is just as important as the accuracy of the diffraction field. The presented algorithm is based on computation of the Fresnel integral over each pixel area on the SLM, thus accurate results are attained. Fast computation of the diffraction field is obtained by scaling of a pre-computed diffraction field, compared to the standard way of computing the diffraction field. The scaling operation is performed using three interpolation methods: rounding to the nearest value, linear and cubic interpolation. Although rounding to the nearest value gives the shortest computation time among the three interpolation methods, it provides the largest normalized mean square error (NMSE). The smallest NMSE can be attained when cubic interpolation is used for computation of the diffraction field, but it yields the longest computation time. Even if the NMSE performance of the linear interpolation method is not as good as that of the cubic interpolation method, the computation time can be reduced by nearly half.
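
    The accuracy/speed trade-off among the three interpolation schemes can be illustrated with a generic one-dimensional stand-in for the pre-computed diffraction field (this is not the paper's Fresnel-integral code; the chirp signal and grids are invented for illustration):

      import numpy as np
      from scipy.interpolate import interp1d

      # Stand-in "pre-computed field": a chirp sampled on a coarse grid.
      x_pre = np.linspace(0.0, 1.0, 256)
      field_pre = np.cos(40 * np.pi * x_pre**2)

      # Grid on which the scaled field is needed, plus an exact reference.
      x_new = np.linspace(0.0, 1.0, 601)
      reference = np.cos(40 * np.pi * x_new**2)

      for kind in ("nearest", "linear", "cubic"):
          approx = interp1d(x_pre, field_pre, kind=kind)(x_new)
          nmse = np.sum((reference - approx) ** 2) / np.sum(reference**2)
          print(f"{kind:8s} NMSE = {nmse:.2e}")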

  9. 3D Navier-Stokes Time Accurate Solutions Using Multipartitioning Parallel Computation Methodology

    Zha, Ge-Cheng

    1998-01-01

    A parallel CFD code solving the 3D time-accurate Navier-Stokes equations with a multipartitioning parallel methodology is being developed in collaboration with Ohio State University within the Air Vehicle Directorate at Wright Patterson Air Force Base. The advantage of the multipartitioning parallel method is that the domain decomposition will not introduce domain boundaries for the implicit operators. A ring-structure data communication is employed so that the implicit time-accurate method can be implemented for multi-processors with the same accuracy as for a single processor. No sub-iteration is needed at the domain boundaries. The code has been validated for some typical unsteady flows, which include Couette flow and flow past a cylinder. The code is now being employed for a large-scale time-accurate wall jet transient flow computation. The preliminary results are promising. The mesh has been refined to capture more details of the flow field. The mesh refinement computation is in progress and would be difficult to implement successfully without the parallel computation techniques used. A modified version of the code with more efficient inversion of the diagonalized block matrix is currently being tested.

  10. Accurate computation of Stokes flow driven by an open immersed interface

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  11. Computer Architecture A Quantitative Approach

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im...

  12. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
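
    A minimal sketch of non-paraxial angular-spectrum propagation, the building block the abstract describes for each layer (the aperture, wavelength and sampling below are hypothetical; a full CGH would combine such propagated fields over all layers):

      import numpy as np

      def angular_spectrum_propagate(u0, wavelength, dx, z):
          """Propagate a sampled complex field u0 by distance z using the
          angular-spectrum transfer function, without paraxial approximation."""
          ny, nx = u0.shape
          fx = np.fft.fftfreq(nx, d=dx)
          fy = np.fft.fftfreq(ny, d=dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2      # squared axial spatial frequency
          kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * z) * (arg > 0)            # evanescent components dropped
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      # Hypothetical layer: a small square aperture propagated 5 cm at 532 nm.
      u = np.zeros((512, 512), dtype=complex)
      u[240:272, 240:272] = 1.0
      u_z = angular_spectrum_propagate(u, wavelength=532e-9, dx=8e-6, z=0.05)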

  13. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨U⟩/2 (where ⟨U⟩ is the ensemble average of the sum of pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨U⟩ can readily be computed through an MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
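
    Once the ensemble-averaged solute-water interaction energy and the morphometric coefficients are in hand, the final combination is a one-line formula; the numbers below are purely illustrative and the coefficients would in practice come from the ER method:

      import numpy as np

      def hydration_free_energy(mean_pair_energy, measures, coefficients):
          """HFE ~ <U>/2 + water-reorganization term, the latter expressed as a
          linear combination of four geometric measures of the solute."""
          return 0.5 * mean_pair_energy + float(np.dot(coefficients, measures))

      # Illustrative values only: volume, surface area, integrated mean and Gaussian curvatures.
      measures = np.array([12000.0, 5500.0, 310.0, 12.6])
      coeffs = np.array([0.015, 0.020, -0.35, 1.2])      # hypothetical c1..c4 from the ER fit
      print(hydration_free_energy(-850.0, measures, coeffs))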

  14. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ∼40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. (paper)

  15. Palm computer demonstrates a fast and accurate means of burn data collection.

    Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E

    2000-01-01

    Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry into a hand-held computer versus handwritten data followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx, Santa Clara, Calif) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash) spreadsheet. One time the data were recorded by the Palm entry method, and the other time the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The total time for the Palm method of data collection and downloading to a personal computer was 23% faster than hand recording with the personal computer entry method (P < 0.05), and fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811

  16. Fast and Accurate Computation of Time-Domain Acoustic Scattering Problems with Exact Nonreflecting Boundary Conditions

    Wang, Li-Lian; Zhao, Xiaodan

    2011-01-01

    This paper is concerned with fast and accurate computation of exterior wave equations truncated via exact circular or spherical nonreflecting boundary conditions (NRBCs, which are known to be nonlocal in both time and space). We first derive analytic expressions for the underlying convolution kernels, which allow for a rapid and accurate evaluation of the convolution with $O(N_t)$ operations over $N_t$ successive time steps. To handle the nonlocality in space, we introduce the notion of boundary perturbation, which enables us to handle general bounded scatterers by solving a sequence of wave equations in a regular domain. We propose an efficient spectral-Galerkin solver with Newmark's time integration for the truncated wave equation in the regular domain. We also provide ample numerical results to show the high-order accuracy of NRBCs and the efficiency of the proposed scheme.

  17. Machine Computation; An Algorithmic Approach.

    Gonzalez, Richard F.; McMillan, Claude, Jr.

    Designed for undergraduate social science students, this textbook concentrates on using the computer in a straightforward way to manipulate numbers and variables in order to solve problems. The text is problem oriented and assumes that the student has had little prior experience with either a computer or programing languages. An introduction to…

  18. A novel approach for latent print identification using accurate overlays to prioritize reference prints.

    Gantz, Daniel T; Gantz, Donald T; Walch, Mark A; Roberts, Maria Antonia; Buscaglia, JoAnn

    2014-12-01

    A novel approach to automated fingerprint matching and scoring that produces accurate locally and nonlinearly adjusted overlays of a latent print onto each reference print in a corpus is described. The technology, which addresses challenges inherent to latent prints, provides the latent print examiner with a prioritized ranking of candidate reference prints based on the overlays of the latent onto each candidate print. In addition to supporting current latent print comparison practices, this approach can make it possible to return a greater number of AFIS candidate prints because the ranked overlays provide a substantial starting point for latent-to-reference print comparison. To provide the image information required to create an accurate overlay of a latent print onto a reference print, "Ridge-Specific Markers" (RSMs), which correspond to short continuous segments of a ridge or furrow, are introduced. RSMs are reliably associated with any specific local section of a ridge or a furrow using the geometric information available from the image. Latent prints are commonly fragmentary, with reduced clarity and limited minutiae (i.e., ridge endings and bifurcations). Even in the absence of traditional minutiae, latent prints contain very important information in their ridges that permit automated matching using RSMs. No print orientation or information beyond the RSMs is required to generate the overlays. This automated process is applied to the 88 good quality latent prints in the NIST Special Database (SD) 27. Nonlinear overlays of each latent were produced onto all of the 88 reference prints in the NIST SD27. With fully automated processing, the true mate reference prints were ranked in the first candidate position for 80.7% of the latents tested, and 89.8% of the true mate reference prints ranked in the top ten positions. After manual post-processing of those latents for which the true mate reference print was not ranked first, these frequencies increased to 90

  19. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar;

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...

  20. Toward detailed prominence seismology - I. Computing accurate 2.5D magnetohydrodynamic equilibria

    Blokland, J W S

    2011-01-01

    Context. Prominence seismology exploits our knowledge of the linear eigenoscillations for representative magnetohydrodynamic models of filaments. To date, highly idealized models for prominences have been used, especially with respect to the overall magnetic configurations. Aims. We initiate a more systematic survey of filament wave modes, where we consider full multi-dimensional models with twisted magnetic fields representative of the surrounding magnetic flux rope. This requires the ability to compute accurate 2.5-dimensional magnetohydrodynamic equilibria that balance Lorentz forces, gravity, and pressure gradients, while containing density enhancements (static or in motion). Methods. The governing extended Grad-Shafranov equation is discussed, along with an analytic prediction for circular flux ropes for the Shafranov shift of the central magnetic axis due to gravity. Numerical equilibria are computed with a finite element-based code, demonstrating fourth order accuracy on an explicitly known, non-triv...

  1. A Novel MoM Approach for Obtaining Accurate and Efficient Solutions in Optical Rib Waveguide

    YENER, Namık

    2002-01-01

    The optical rib waveguide (ORW) plays an important role in the design of several integrated optical devices. Various methods have been proposed for obtaining the modal field solutions in ORW. However, to the best of our knowledge none of them is capable of providing accurate full-wave benchmark solutions. Here we present a novel MoM approach wherein the modes of a loaded rectangular waveguide are utilized as basis functions and demonstrate that this approach is very efficient and yie...

  2. An institutional approach to computational social creativity

    Corneli, Joseph

    2016-01-01

    Elinor Ostrom's Nobel Memorial Prize-winning work on "the analysis of economic governance, especially the commons" scaffolds an argument for an institutional approach to computational social creativity. Several Ostrom-inspired "creativity design principles" are explored and exemplified to illustrate the computational and institutional structures that are employed in current and potential computational creativity practice.

  3. Soft Computing Approaches To Fault Tolerant Systems

    Neeraj Prakash Srivastava

    2014-05-01

    We present in this paper an introduction to soft computing techniques for fault-tolerant systems and the associated terminology, together with different ways of achieving fault tolerance. The paper focuses on the problem of fault tolerance using soft computing techniques. The fundamentals of soft computing approaches and their types, with an introduction to fault tolerance, are discussed. The main objective is to show how to implement soft computing approaches for fault detection, isolation and identification. The paper contains details about soft computing applications, with an application of wireless sensor networks as a fault-tolerant system.

  4. Antenna arrays a computational approach

    Haupt, Randy L

    2010-01-01

    This book covers a wide range of antenna array topics that are becoming increasingly important in wireless applications, particularly in design and computer modeling. Signal processing and numerical modeling algorithms are explored, and MATLAB computer codes are provided for many of the design examples. Pictures of antenna arrays and components provided by industry and government sources are presented with explanations of how they work. Antenna Arrays is a valuable reference for practicing engineers and scientists in wireless communications, radar, and remote sensing, and an excellent textbook for advanced antenna courses.

  5. Ontological Approach toward Cybersecurity in Cloud Computing

    Takahashi, Takeshi; Kadobayashi, Youki; FUJIWARA, HIROYUKI

    2014-01-01

    Widespread deployment of the Internet enabled the building of an emerging IT delivery model, i.e., cloud computing. Although cloud computing-based services have developed rapidly, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, the cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an...

  6. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte-Carlo approach. This Monte-Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method facilitates a means to use actual operational data leading to improvements over traditional methods (e.g., sensitivity analysis) which assume parametric models that may not accurately capture the possible complex statistical structures in the system input and responses. (author)
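
    A generic Monte-Carlo propagation sketch of the kind described above; the epistemic and aleatory error magnitudes and the nominal CCP value are invented for illustration and are not taken from the paper:

      import numpy as np

      def ccp_uncertainty(nominal_ccp, n_samples=100_000, seed=0):
          """Propagate illustrative epistemic (bias-like) and aleatory (random)
          errors through the critical channel power and report percentiles."""
          rng = np.random.default_rng(seed)
          epistemic = rng.uniform(-0.02, 0.02, n_samples)   # hypothetical +/- 2 % model bias
          aleatory = rng.normal(0.0, 0.01, n_samples)       # hypothetical 1 % random noise
          samples = nominal_ccp * (1.0 + epistemic + aleatory)
          return np.percentile(samples, [2.5, 50.0, 97.5])

      print(ccp_uncertainty(nominal_ccp=7.5))               # hypothetical nominal CCP in MW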

  7. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out that it is straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to greenhouse gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate skeptic argument - that the models are wrong and the warming is natural. It finally allows for a closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
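
    The core of the argument is an ordinary linear fit of global temperature against the CO2 forcing surrogate, with the residuals read as natural variability; the series below are synthetic stand-ins, not the actual data used by Lovejoy:

      import numpy as np

      # Synthetic annual series standing in for 1880-2014: forcing in W m^-2, anomaly in K.
      forcing = np.linspace(0.0, 2.5, 135)
      rng = np.random.default_rng(1)
      temperature = 0.35 * forcing + 0.1 * rng.standard_normal(forcing.size)

      # Linear fit: the fitted line is the anthropogenic part, the residuals the natural part.
      slope, intercept = np.polyfit(forcing, temperature, 1)
      anthropogenic = slope * forcing + intercept
      natural_variability = temperature - anthropogenic

      print(f"sensitivity ~ {slope:.2f} K per W m^-2")
      print(f"residual std ~ {natural_variability.std():.2f} K")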

  8. Infinitesimal symmetries: a computational approach

    This thesis is concerned with computational aspects of the determination of infinitesimal symmetries and Lie-Baecklund transformations of differential equations; moreover, some problems are calculated explicitly. A brief introduction to some concepts in the theory of symmetries and Lie-Baecklund transformations, relevant for this thesis, is given. The mathematical formalism is briefly reviewed. The jet bundle formulation is chosen, in which, by its algebraic nature, objects can be described very precisely; consequently it is appropriate for implementation. A number of procedures are discussed which enable these computations, very extensive in practice, to be carried out with the help of a computer. The Lie algebras of infinitesimal symmetries of a number of differential equations in Mathematical Physics are established and some of their applications are discussed, i.e., Maxwell equations, nonlinear diffusion equation, nonlinear Schroedinger equation, nonlinear Dirac equations and self-dual SU(2) Yang-Mills equations. Lie-Baecklund transformations of Burgers' equation, the Classical Boussinesq equation and the Massive Thirring Model are determined. Furthermore, nonlocal Lie-Baecklund transformations of the last equation are derived. (orig.)

  9. [The determinant role of an accurate medicosocial approach in the prognosis of pediatric blood diseases].

    Toppet, M

    2005-01-01

    The care of infancy and childhood blood diseases implies a comprehensive medicosocial approach. This is a prerequisite for regular follow-up, for satisfactory compliance with treatment and for an optimal quality of life for the patient. Different modalities of the medicosocial approach have been developed in the pediatric department (firstly in the Hospital Saint Pierre and then in the Children's University Hospital HUDERF). The importance of a recent reform of the increased family allowances is briefly presented. The author underlines the determinant role of an accurate global approach, in which the patient and the family are surrounded by a multidisciplinary team, including social workers. PMID:16454232

  10. COMPUTATIONAL APPROACH TO ORGANIZATIONAL DESIGN

    Alexander Arenas; Roger Guimera; Joan R. Alabart; Hans-Joerg Witt; Albert Diaz-Guilera

    2000-01-01

    The idea of this work is to propose an agent-based model of company dynamics that is abstract and simple enough to allow the problem of organizational design to be treated computationally and even analytically. Nevertheless, the model should be able to reproduce the essential characteristics of real organizations. The natural way of modeling a company is as a network where the nodes represent employees and the links between them represent communication lines. In our model, problems ar...

  11. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128
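
    In the spirit of the generalized hybrid (ONIOM-like) energy expression mentioned above, a toy sketch of the extrapolation step is given below; the energy functions and fragments are placeholders, and the inclusion-exclusion bookkeeping needed for overlapping subsystems is deliberately omitted:

      def hybrid_energy(full_system, subsystems, e_high, e_low):
          """Low-level energy of the full system plus high-minus-low corrections
          evaluated on the fragments, as in ONIOM-style extrapolation."""
          correction = sum(e_high(s) - e_low(s) for s in subsystems)
          return e_low(full_system) + correction

      # Toy stand-ins for electronic-structure calls (one "atom" per character).
      e_low = lambda frag: -1.00 * len(frag)
      e_high = lambda frag: -1.05 * len(frag)
      print(hybrid_energy("ABCDEFGH", ["ABCD", "DEFG", "GH"], e_high, e_low))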

  12. Immune based computer virus detection approaches

    TAN Ying; ZHANG Pengtao

    2013-01-01

    The computer virus is considered one of the most horrifying threats to the security of computer systems worldwide. The rapid development of evasion techniques used in viruses causes signature-based computer virus detection techniques to be ineffective. Many novel computer virus detection approaches have been proposed in the past to cope with this ineffectiveness, mainly classified into three categories: static, dynamic and heuristic techniques. Given the natural similarities between the biological immune system (BIS) and the computer security system (CSS), the artificial immune system (AIS) was developed as a new prototype in the anti-virus research community. The immune mechanisms in the BIS provide the opportunity to construct computer virus detection models that are robust and adaptive, with the ability to detect unseen viruses. In this paper, a variety of classic computer virus detection approaches are introduced and reviewed against the background of computer virus history. Next, a variety of immune-based computer virus detection approaches are discussed in detail. Promising experimental results suggest that immune-based computer virus detection approaches are able to detect new variants and unseen viruses at lower false positive rates, which has paved a new way for anti-virus research.

  13. Computer Architecture A Quantitative Approach

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  14. Accurate definition of brain regions position through the functional landmark approach.

    Thirion, Bertrand; Varoquaux, Gaël; Poline, Jean-Baptiste

    2010-01-01

    In many applications of functional magnetic resonance imaging (fMRI), including clinical or pharmacological studies, the definition of the location of the functional activity between subjects is crucial. While current acquisition and normalization procedures improve the accuracy of the functional signal localization, it is also important to ensure that functional foci detection yields accurate results and reflects between-subject variability. Here we introduce a fast functional landmark detection procedure that explicitly models the spatial variability of activation foci in the observed population. We compare this detection approach to standard statistical map peak extraction procedures: we show that it yields more accurate results on simulations, and more reproducible results on a large cohort of subjects. These results demonstrate that explicit functional landmark modeling approaches are more effective than standard statistical mapping for brain functional focus detection. PMID:20879321

  15. Learning and geometry computational approaches

    Smith, Carl

    1996-01-01

    The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...

  16. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect

  17. Cloud computing methods and practical approaches

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  18. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Wang Xingyuan

    2010-01-01

    This paper presents two methods for accurately computing the periodic regions' centers. One method applies to general M-sets with an integer index number, the other to general M-sets with a negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations which determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can get the centers' coordinates with at least 48 significant digits after the decimal point in both real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
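
    For the classical index-2 case, the Newton iteration referred to above can be sketched with arbitrary-precision arithmetic as follows (the general M-set with negative integer index replaces z^2 + c by z^alpha + c; the starting guess and precision below are illustrative):

      from mpmath import mp, mpc

      mp.dps = 60   # work with roughly 60 significant digits

      def period_k_center(c0, k, iterations=60):
          """Newton's method on F(c) = f_c^k(0) with f_c(z) = z^2 + c; the value z
          and its derivative dz/dc are propagated through the k-fold composition."""
          c = mpc(c0)
          for _ in range(iterations):
              z, dz = mpc(0), mpc(0)
              for _ in range(k):
                  z, dz = z * z + c, 2 * z * dz + 1
              c -= z / dz
          return c

      # Period-3 center on the real axis near -1.75:
      print(period_k_center(-1.75, 3))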

  19. Simple and Accurate Computations of Solvatochromic Shifts in pi -> pi* Transitions of Aromatic Chromophors

    Heinz, Hendrik; Suter, Ulrich W.; Leontidis, Epameinondas

    2001-01-01

    A new approach is introduced for calculating the spectral shifts of the most bathochromic pi -> pi* transition of an aromatic chromophore in apolar environments. As an example, perylene in solid and liquid n-alkane matrices was chosen, and all shifts are calculated relative to one well-defined solid-inclusion system. It is shown that a simple two-level treatment of the solute using Hückel theory yields spectral shifts in excellent agreement with experimental results for the most prominent inclusion sites of perylene in solid n-alkane surroundings and for the dilute solutions in liquid n-alkanes. The idea is general enough to be applied to any aromatic chromophore in a nonpolar solvent matrix. In contrast to earlier treatments, this approach is based on geometry-dependent polarizabilities, employs an r^-4 dependence for the dispersion energy, is conceptually simple and computationally efficient. Different simple models based on our general approach to compute the UV spectral shifts due to solvation indicate tha...

  20. A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses

    Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A

    2015-01-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...

  1. Accurate Computation of Reduction Potentials of 4Fe−4S Clusters Indicates a Carboxylate Shift in Pyrococcus furiosus Ferredoxin

    Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager

    2007-01-01

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...

  2. On the Fourier expansion method for highly accurate computation of the Voigt/complex error function in a rapid algorithm

    Abrarov, S M

    2012-01-01

    In our recent publication [1] we presented an exponential series approximation suitable for highly accurate computation of the complex error function in a rapid algorithm. In this Short Communication we describe how a simplified representation of the proposed complex error function approximation makes possible further algorithmic optimization resulting in a considerable computational acceleration without compromise on accuracy.

  3. GRID COMPUTING AND FAULT TOLERANCE APPROACH

    Pankaj Gupta

    2011-10-01

    Grid computing is a means of allocating the computational power of a large number of computers to complex or difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that even span organizational boundaries. This paper proposes a method to achieve maximum fault tolerance in a Grid environment by considering reliability and by using a replication approach and a checkpoint approach. Fault tolerance is an important property for large-scale computational grid systems, where geographically distributed nodes co-operate to execute a task. In order to achieve a high level of reliability and availability, the grid infrastructure should be foolproof and fault tolerant. Since the failure of resources affects job execution fatally, a fault tolerance service is essential to satisfy QoS requirements in grid computing. Commonly utilized techniques for providing fault tolerance are job checkpointing and replication. Both techniques mitigate the amount of work lost due to changing system availability but can introduce significant runtime overhead. The latter largely depends on the length of the checkpointing interval and the chosen number of replicas, respectively. In the case of complex scientific workflows, where tasks can execute in a well-defined order, reliability is another major challenge because of the unreliable nature of the grid resources.
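
    A minimal checkpoint/restart sketch in the spirit of the job checkpointing discussed above (the file name, work unit and interval are placeholders; a real grid middleware would handle this at the scheduler level):

      import os
      import pickle

      CKPT = "task_state.ckpt"          # hypothetical checkpoint file

      def run_with_checkpoints(n_steps, checkpoint_every=100):
          """Resume from the last checkpoint if one exists, so a failure only
          loses the work done since that checkpoint."""
          start, total = 0, 0.0
          if os.path.exists(CKPT):
              with open(CKPT, "rb") as fh:
                  start, total = pickle.load(fh)
          for step in range(start, n_steps):
              total += step * step      # stand-in for the real unit of work
              if (step + 1) % checkpoint_every == 0:
                  with open(CKPT, "wb") as fh:
                      pickle.dump((step + 1, total), fh)
          return total

      print(run_with_checkpoints(1000))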

  4. Toward exascale computing through neuromorphic approaches.

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  5. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1

  6. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
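
    The volume-overlap metrics quoted above are straightforward to compute from two boolean voxel masks; a small synthetic example is shown below (surface-distance metrics additionally require the extracted surfaces and are omitted here):

      import numpy as np

      def volume_overlap_metrics(pred, truth, voxel_volume_mm3):
          """Volume difference (mm3) and Dice similarity coefficient (%) between
          a predicted and a reference boolean segmentation mask."""
          pred, truth = pred.astype(bool), truth.astype(bool)
          vd = abs(int(pred.sum()) - int(truth.sum())) * voxel_volume_mm3
          dsc = 200.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())
          return vd, dsc

      # Two overlapping synthetic "teeth" in a 3-D volume with 0.3 mm isotropic voxels.
      a = np.zeros((40, 40, 40), dtype=bool); a[10:30, 10:30, 10:30] = True
      b = np.zeros_like(a);                   b[12:32, 10:30, 10:30] = True
      print(volume_overlap_metrics(a, b, voxel_volume_mm3=0.3**3))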

  7. Parallel Computation in Econometrics: A Simplified Approach

    Jurgen A. Doornik; Shephard, Neil; Hendry, David F.

    2004-01-01

    Parallel computation has a long history in econometric computing but is not at all widespread. We believe that a major impediment is the labour cost of coding for parallel architectures. Moreover, programs for specific hardware often become obsolete quite quickly. Our approach is to take a popular matrix programming language (Ox) and implement a message-passing interface using MPI. Next, object-oriented programming allows us to hide the specific parallelization code, so that a program does...

  8. A Big Data Approach to Computational Creativity

    Varshney, Lav R; Varshney, Kush R; Bhattacharjya, Debarun; Schoergendorfer, Angela; Chee, Yi-Min

    2013-01-01

    Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not been able to achieve a valid selective step. This work shows how bringing data sources from the creative domain and from hedonic psychophysics together with big data analytics techniques can overcome this shortcoming to yield a system that can produce novel and high-quality creative artifacts. Our data-driven approach is demonstrated through a computational creativity system for culinary recipes and menus we developed and deployed, which can operate either autonomously or semi-autonomously with human interaction. We also comment on the volume, velocity, variety, and veracity of data in computational creativity.

  9. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    Gendron, É.

    2014-08-04

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and on an optimized linear algebra library, MORSE, which provides a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
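
    A small-scale sketch of the MMSE tomographic reconstructor and the residual covariance it induces; the covariance matrices below are tiny synthetic stand-ins for the roughly 40 000 x 40 000 operational case, and GPU acceleration is not shown:

      import numpy as np

      def mmse_reconstructor(C_xy, C_yy, noise_var):
          """R = C_xy (C_yy + N)^-1, computed by solving a linear system
          rather than forming the explicit inverse (here N = noise_var * I)."""
          C_n = C_yy + noise_var * np.eye(C_yy.shape[0])
          return np.linalg.solve(C_n, C_xy.T).T

      def residual_covariance(C_xx, C_xy, R):
          """Covariance of x - R y for the MMSE estimator."""
          return C_xx - R @ C_xy.T

      # Jointly sampled synthetic covariances: 50 "phase" and 80 "measurement" variables.
      rng = np.random.default_rng(2)
      data = rng.standard_normal((5000, 130)) @ rng.standard_normal((130, 130))
      C = np.cov(data, rowvar=False)
      C_xx, C_xy, C_yy = C[:50, :50], C[:50, 50:], C[50:, 50:]

      R = mmse_reconstructor(C_xy, C_yy, noise_var=1e-2)
      C_err = residual_covariance(C_xx, C_xy, R)
      print(np.trace(C_err))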

  10. Unilateral hyperlucency of the lung: a systematic approach to accurate radiographic interpretation

    Noh, Hyung Jun; Oh, Yu Whan; Choi, Eun Jung; Seo, Bo Kyung; Cho, Kyu Ran; Kang, Eun Young; Kim, Jung Hyuk [Korea University College of Medicine, Seoul (Korea, Republic of)

    2002-12-01

    The radiographic appearance of a unilateral hyperlucent lung is related to various conditions, the accurate radiographic interpretation of which requires a structured approach as well as an awareness of the spectrum of these entities. Firstly, it is important to determine whether a hyperlucent hemithorax is associated with artifacts resulting from rotation of the patient, grid cutoff, or the heel effect. The second step is to determine whether or not a hyperlucent lung is abnormal. Lung that is in fact normal may appear hyperlucent because of diffusely increased opacity of the opposite hemithorax. Thirdly, thoracic wall and soft tissue abnormalities such as mastectomy or Poland syndrome may cause hyperlucency. Lastly, abnormalities of the lung parenchyma may result in hyperlucency. Lung abnormalities can be divided into two groups: a) obstructive or compensatory hyperinflation; and b) reduced vascular perfusion of the lung due to congenital or acquired vascular abnormalities. In this article, we describe and illustrate the imaging spectrum of these causes and outline a structured approach to accurate radiographic interpretation.

  11. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions', which we herein call `adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea, distinctly noted here for the first time, may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method: mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both that no physically existent mode is lost and that high precision is achieved, without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is necessary in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers, aided by the concept of a `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
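
    The core numerical task in such normal-mode calculations is locating, at each frequency, every root of a secular (dispersion) function without missing closely spaced or weakly expressed modes. The generic scan-and-refine pattern is sketched below on a stand-in secular function; the adaptive mode observers of the paper replace the single function F with a family of secular functions tied to specific interfaces.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def secular(c):
        """Stand-in secular function of phase velocity c; a real implementation builds it
        from the generalized reflection/transmission coefficients of the layered model."""
        return np.sin(6.0 * np.pi * c) - 0.3 * c

    def find_roots(F, c_min, c_max, n_scan=2000):
        """Bracket sign changes on a fine grid, then refine each bracket with Brent's method."""
        c = np.linspace(c_min, c_max, n_scan)
        f = np.array([F(ci) for ci in c])
        roots = []
        for i in range(n_scan - 1):
            if f[i] == 0.0:
                roots.append(c[i])
            elif f[i] * f[i + 1] < 0.0:          # sign change => a root is bracketed
                roots.append(brentq(F, c[i], c[i + 1]))
        return np.array(roots)

    print(find_roots(secular, 0.1, 2.0))
    ```

    The adaptive element of the paper lies in choosing which observer (secular function) to scan so that trapped and Stoneley modes produce well-resolved sign changes instead of numerically invisible ones.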

  12. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N=26 HCM-patients underwent leMDCT (64-slice-CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT-measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  13. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O.; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N=26 HCM-patients underwent leMDCT (64-slice-CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 in CMR (p > 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT-measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  14. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    Balzar, D. [Ruder Boskovic Inst., Zagreb (Croatia); Ledbetter, H. [National Inst. of Standards and Technology, Boulder, CO (United States)

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
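
    For reference, the Voigt profile used here is the convolution of a Gaussian and a Lorentzian, and the pseudo-Voigt used for fast fitting approximates it by a weighted sum of the two. The sketch below compares the two shapes; the mixing fraction eta is left as a free input rather than the specific parameterization used in the refinement program.

    ```python
    import numpy as np
    from scipy.special import voigt_profile    # exact Voigt: Gaussian (sigma) convolved with Lorentzian (gamma)

    def pseudo_voigt(x, fwhm, eta):
        """Pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian, both normalized with the same FWHM."""
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        gauss = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
        lorentz = (fwhm / (2.0 * np.pi)) / (x**2 + (fwhm / 2.0)**2)
        return eta * lorentz + (1.0 - eta) * gauss

    x = np.linspace(-5.0, 5.0, 1001)
    exact = voigt_profile(x, 1.0, 0.5)               # sigma = 1.0, gamma = 0.5 (arbitrary units)
    approx = pseudo_voigt(x, fwhm=3.0, eta=0.4)      # illustrative FWHM and mixing fraction

    print("peak heights (exact Voigt, pseudo-Voigt):", exact.max(), approx.max())
    ```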

  15. Hybrid soft computing approaches research and applications

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  16. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912
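
    The classification step described here, a Bayesian classifier applied per pixel to IR spectral features and evaluated with ROC curves, can be sketched generically as below. The synthetic two-class spectra, the Gaussian naive Bayes model and the 10% false-alarm operating point are illustrative stand-ins, not the authors' exact protocol.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import roc_curve
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Synthetic "pixel spectra": 2000 pixels x 20 spectral features, two tissue classes.
    n, d = 2000, 20
    y = rng.integers(0, 2, size=n)
    X = rng.standard_normal((n, d)) + 0.8 * y[:, None]    # class 1 shifted in every feature

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clf = GaussianNB().fit(X_tr, y_tr)                    # Bayesian (naive Bayes) pixel classifier
    scores = clf.predict_proba(X_te)[:, 1]

    fpr, tpr, _ = roc_curve(y_te, scores)
    pd_at_10pct_fa = np.interp(0.10, fpr, tpr)            # detection probability at 10% false alarm
    print(f"P_D at 10% P_FA: {pd_at_10pct_fa:.3f}")
    ```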

  17. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Saumya Tiwari

    Rejection is a common problem after cardiac transplants leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.

  18. A novel approach to accurate portal dosimetry using CCD-camera based EPIDs

    A new method for portal dosimetry using CCD camera-based electronic portal imaging devices (CEPIDs) is demonstrated. Unlike previous approaches, it is not based on a priori assumptions concerning CEPID cross-talk characteristics. In this method, the nonsymmetrical and position-dependent cross-talk is determined by directly imaging a set of cross-talk kernels generated by small fields ('pencil beams') exploiting the high signal-to-noise ratio of a cooled CCD camera. Signal calibration is achieved by imaging two reference fields. Next, portal dose images (PDIs) can be derived from electronic portal dose images (EPIs), in a fast forward-calculating iterative deconvolution. To test the accuracy of these EPI-based PDIs, a comparison is made to PDIs obtained by scanning diode measurements. The method proved accurate to within 0.2±0.7% (1 SD), for on-axis symmetrical and asymmetrical fields with different field widths and homogeneous phantom thicknesses, off-axis Alderson thorax fields and a strongly modulated IMRT field. Hence, the proposed method allows for fast, accurate portal dosimetry. In addition, it is demonstrated that the CEPID cross-talk signal is not only induced by optical photon reflection and scatter within the CEPID structure, but also by high-energy back-scattered radiation from CEPID elements (mirror and housing) towards the fluorescent screen
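
    The forward-calculating iterative deconvolution described above is essentially a fixed-point scheme: convolve the current portal-dose estimate with the measured cross-talk kernel, compare with the EPI, and correct. The sketch below shows a Van Cittert style version of that idea with a single shift-invariant kernel; the real method uses the position-dependent pencil-beam kernels, which this toy deliberately omits.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def forward_deconvolve(epi, kernel, n_iter=30, relax=1.0):
        """Iteratively estimate the portal dose image (PDI) from the EPI:
        pdi <- pdi + relax * (epi - kernel (*) pdi), with (*) a 2-D convolution."""
        pdi = epi.copy()                                     # start from the measured EPI
        for _ in range(n_iter):
            resim = fftconvolve(pdi, kernel, mode="same")    # re-simulated EPI from current PDI
            pdi += relax * (epi - resim)
        return pdi

    # Toy data: a flat square field blurred by a broad, normalized cross-talk kernel.
    x = np.linspace(-10.0, 10.0, 201)
    X, Y = np.meshgrid(x, x)
    pdi_true = ((np.abs(X) < 5) & (np.abs(Y) < 5)).astype(float)
    kernel = np.exp(-(X**2 + Y**2) / (2.0 * 2.0**2))
    kernel /= kernel.sum()
    epi = fftconvolve(pdi_true, kernel, mode="same")

    pdi_est = forward_deconvolve(epi, kernel)
    print("max reconstruction error:", np.abs(pdi_est - pdi_true).max())
    ```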

  19. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    Commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.
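
    For orientation, the two-dimensional Gibbs–Duhem relation underlying this kind of analysis can be written, for a layer of N molecules occupying area A at spreading pressure $\pi$ (a standard form, not necessarily the exact working expression of the paper, which additionally ties the lattice constant into the balance):

    $S\,dT - A\,d\pi + N\,d\mu = 0 \quad\Longrightarrow\quad d\mu = \frac{A}{N}\,d\pi - \frac{S}{N}\,dT,$

    so that integrating along an isotherm or isostere gives the chemical potential of the crystalline layer from quantities accessible in simulation.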

  20. Accurate characterization of mask defects by combination of phase retrieval and deterministic approach

    Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong

    2016-06-01

    In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system providing only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of the defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to use a phase retrieval algorithm first to retrieve the general shape of the mask, and then a deterministic approach to characterize precisely the defects detected.
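
    As a concrete stand-in for the phase retrieval stage mentioned above, the sketch below runs a basic Gerchberg-Saxton / error-reduction loop between the mask plane (support and transmission constraints) and the diffraction plane (measured modulus). The support shape and the measured pattern are synthetic, and the deterministic defect-characterization step that follows in the paper is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic "mask": a line-and-space pattern inside a known support region.
    n = 128
    support = np.zeros((n, n), dtype=bool)
    support[32:96, 32:96] = True
    mask = np.zeros((n, n))
    mask[32:96, 32:96:8] = 1.0                       # vertical lines with an 8-pixel pitch

    measured_modulus = np.abs(np.fft.fft2(mask))     # only the diffracted intensity is available

    # Error-reduction (Gerchberg-Saxton type) iterations.
    obj = rng.random((n, n))                         # random initial guess
    for _ in range(200):
        F = np.fft.fft2(obj)
        F = measured_modulus * np.exp(1j * np.angle(F))   # impose the measured modulus
        obj = np.real(np.fft.ifft2(F))
        obj[~support] = 0.0                          # impose the known support
        obj = np.clip(obj, 0.0, 1.0)                 # transmission values lie in [0, 1]

    # The recovery is only defined up to the usual phase-retrieval ambiguities.
    print("rms difference vs. true mask:", np.sqrt(np.mean((obj - mask)**2)))
    ```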

  1. Computational Approach for Developing Blood Pump

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support to sick ventricles for those who suffer from late stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing a compact design with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  2. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  3. Handbook of computational approaches to counterterrorism

    Subrahmanian, VS

    2012-01-01

    Terrorist groups throughout the world have been studied primarily through the use of social science methods. However, major advances in IT during the past decade have led to significant new ways of studying terrorist groups, making forecasts, learning models of their behaviour, and shaping policies about their behaviour. Handbook of Computational Approaches to Counterterrorism provides the first in-depth look at how advanced mathematics and modern computing technology are shaping the study of terrorist groups. This book includes contributions from world experts in the field, and presents extens

  4. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  5. Isogenies of Elliptic Curves: A Computational Approach

    Shumow, Daniel

    2009-01-01

    Isogenies, the mappings of elliptic curves, have become a useful tool in cryptology. These mathematical objects have been proposed for use in computing pairings, constructing hash functions and random number generators, and analyzing the reducibility of the elliptic curve discrete logarithm problem. With such diverse uses, understanding these objects is important for anyone interested in the field of elliptic curve cryptography. This paper, targeted at an audience with a knowledge of the basic theory of elliptic curves, provides an introduction to the necessary theoretical background for understanding what isogenies are and their basic properties. This theoretical background is used to explain some of the basic computational tasks associated with isogenies. Herein, algorithms for computing isogenies are collected and presented with proofs of correctness and complexity analyses. As opposed to the complex analytic approach provided in most texts on the subject, the proofs in this paper are primarily algebraic i...

  6. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the blaCARB-17-like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that blaCARB-17-like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17-like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  7. An accurate, fast, mathematically robust, universal, non-iterative algorithm for computing multi-component diffusion velocities

    Ambikasaran, Sivaram

    2015-01-01

    Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities, for low density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criteria, which scales as $\mathcal{O}(N^3)$, when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocal of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of matrix $V$ is computed in a fast manner at a computational complexity of $\...
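
    For context, the baseline O(N^3) computation that the paper accelerates amounts to a dense linear solve of the Stefan-Maxwell system closed by the zero net mass-flux condition. The sketch below writes down that naive dense version for a toy mixture (mole fractions, molar masses and binary diffusivities are all made up, and sign conventions vary between texts); the paper's O(N) low-rank algorithm is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    N = 5                                     # number of species (toy)
    X = rng.random(N); X /= X.sum()           # mole fractions
    mol_w = rng.uniform(2.0, 40.0, N)         # molar masses (made up)
    Y = X * mol_w / np.sum(X * mol_w)         # mass fractions
    D = rng.uniform(0.1, 1.0, (N, N))         # binary diffusivities (made up)
    D = 0.5 * (D + D.T)                       # enforce D_ij = D_ji

    gradX = rng.standard_normal(N)
    gradX -= gradX.mean()                     # driving forces must sum to zero

    # Stefan-Maxwell:  grad X_i = sum_{j != i} (X_i X_j / D_ij) (V_j - V_i),  i.e.  M V = grad X.
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                M[i, j] = X[i] * X[j] / D[i, j]
        M[i, i] = -np.sum(M[i, :])            # each row sums to zero; M is singular of rank N-1

    b = gradX.copy()
    M[-1, :] = Y                              # replace one equation by  sum_i Y_i V_i = 0
    b[-1] = 0.0

    V = np.linalg.solve(M, b)                 # the O(N^3) dense solve the paper replaces
    print("diffusion velocities:", V)
    print("net mass flux (should be ~0):", np.dot(Y, V))
    ```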

  8. Advanced computational approaches to biomedical engineering

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  9. Computational Approaches to Nucleic Acid Origami.

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms. PMID:26348196

  10. Fast and Accurate Computation Tools for Gravitational Waveforms from Binary Systems with Any Orbital Eccentricity

    Pierro, V; Spallicci, A D; Laserra, E; Recano, F

    2001-01-01

    The relevance of orbital eccentricity in the detection of gravitational radiation from (steady state) binary stars is emphasized. Computationally effective (fast and accurate) tools for constructing gravitational wave templates from binary stars with any orbital eccentricity are introduced, including tight estimation criteria of the pertinent truncation and approximation errors.
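
    A basic building block of waveform templates at arbitrary eccentricity is solving Kepler's equation E - e sin E = M for the eccentric anomaly at each time step. The Newton-iteration sketch below is a generic textbook routine, not the specific fast template tools introduced in the paper.

    ```python
    import numpy as np

    def eccentric_anomaly(M, e, tol=1e-14, max_iter=50):
        """Solve Kepler's equation E - e*sin(E) = M by Newton iteration (vectorized in M)."""
        M = np.asarray(M, dtype=float)
        E = np.where(e < 0.8, M.copy(), np.pi * np.ones_like(M))   # common starting guess
        for _ in range(max_iter):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E = E - dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    # Mean anomaly over one orbit for a fairly eccentric binary (e = 0.7).
    M = np.linspace(0.0, 2.0 * np.pi, 9)
    E = eccentric_anomaly(M, 0.7)
    print("max residual of Kepler's equation:", np.max(np.abs(E - 0.7 * np.sin(E) - M)))
    ```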

  11. Accurate local density photoionization cross sections by LCAO Stieltjes imaging approach

    Photoionization cross sections are evaluated at the local density level, avoiding any potential shape approximation, by employing the Stieltjes imaging (ST) technique in conjunction with large (STO) basis LCAO calculations. The ST technique proves accurate in comparison with full continuum calculations of noble gas ionization. Several choices for the final-state potentials, including different exchange-correlation potentials, are tested on the noble gases, as well as on H2O and CO. Finally, several molecules are investigated employing the transition-state Xα potential and the results compared with available experimental data and previous calculations. The accuracy of the proposed approach appears quite comparable to the ab initio static-exchange level, removing spurious resonances associated with the muffin-tin approximation, and should prove useful in the analysis and interpretation of cross-section data for large systems, where it is shown that basis-set requirements become less stringent. The accuracy obtainable for the cross-section profiles allows close comparison with experimental data, which, although showing good general agreement for the shape of the cross-section profile, indicates the need for further refinement of the theoretical model. 55 refs., 15 figs., 1 tab

  12. A fourth order accurate finite difference scheme for the computation of elastic waves

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate on the spatial derivatives and second order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth order scheme requires two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
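
    The fourth-order spatial accuracy referred to above comes from the standard five-point central stencil f'(x) ≈ [-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)] / (12h). The sketch below checks that convergence rate on a smooth periodic test function; it illustrates only the stencil, not the full velocity-stress elastic scheme of the paper.

    ```python
    import numpy as np

    def d1_fourth_order(f, h):
        """Fourth-order accurate first derivative on a periodic grid."""
        return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
                - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * h)

    for n in (64, 128, 256):
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        h = x[1] - x[0]
        err = np.max(np.abs(d1_fourth_order(np.sin(x), h) - np.cos(x)))
        print(f"n = {n:4d}   max error = {err:.3e}")   # error should drop by ~16x per grid doubling
    ```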

  13. An event-driven method to simulate Filippov systems with accurate computing of sliding motions

    Piiroinen, PT; Kuznetsov, YA

    2005-01-01

    This paper describes how to use smooth solvers for simulation of a class of piecewise smooth dynamical systems, called Filippov systems, with discontinuous vector fields. In these systems, constrained motion along a discontinuity surface (so-called sliding) is possible and requires special treatment numerically. The introduced algorithms are based on an extension of Filippov's method to stabilize the sliding flow, together with accurate detection of the entrance and exit of sliding regions. The ...
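
    The key numerical ingredient described above, accurate detection of where a trajectory reaches the discontinuity surface so that integration can be stopped and the sliding vector field switched in, maps naturally onto event location in standard ODE solvers. The sketch below detects such a crossing for a toy dry-friction-like system; it illustrates only the event-detection part, not Filippov's sliding-flow construction itself.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy piecewise-smooth system: x'' = -x - sign(x'); the switching surface is H(x, v) = v = 0.
    def rhs(t, y):
        x, v = y
        return [v, -x - np.sign(v)]

    def hit_surface(t, y):
        return y[1]                      # zero exactly when the velocity crosses v = 0
    hit_surface.terminal = True          # stop the integration at the crossing
    hit_surface.direction = 0            # detect crossings in either direction

    sol = solve_ivp(rhs, (0.0, 20.0), [3.0, 0.5], events=hit_surface,
                    rtol=1e-10, atol=1e-12, dense_output=True)

    t_cross = sol.t_events[0][0]
    print("first crossing of the switching surface at t =", t_cross)
    print("state at the crossing:", sol.sol(t_cross))
    # An event-driven Filippov scheme would now test whether the flow crosses the
    # surface or enters a sliding mode, and restart the integration accordingly.
    ```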

  14. Introducing Computational Approaches in Intermediate Mechanics

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
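
    One of the computational exercises listed above, diagonalizing the moment of inertia tensor to find principal axes, reduces to a small symmetric eigenvalue problem. A minimal illustration for a set of point masses with made-up coordinates might look like this:

    ```python
    import numpy as np

    # Point masses (kg) and their coordinates (m); arbitrary example values.
    m = np.array([1.0, 2.0, 1.5, 0.5])
    r = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])

    # Inertia tensor about the center of mass: I = sum_k m_k [(r_k . r_k) 1 - r_k r_k^T]
    r_cm = (m[:, None] * r).sum(axis=0) / m.sum()
    d = r - r_cm
    I = np.zeros((3, 3))
    for mk, dk in zip(m, d):
        I += mk * (np.dot(dk, dk) * np.eye(3) - np.outer(dk, dk))

    # Principal moments and principal axes from the symmetric eigenproblem.
    moments, axes = np.linalg.eigh(I)
    print("principal moments of inertia:", moments)
    print("principal axes (columns):\n", axes)
    ```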

  15. Computer Forensics Education - the Open Source Approach

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  16. Predicting microbial interactions through computational approaches.

    Li, Chenhao; Lim, Kun Ming Kenneth; Chng, Kern Rei; Nagarajan, Niranjan

    2016-06-01

    Microorganisms play a vital role in various ecosystems and characterizing interactions between them is an essential step towards understanding the organization and function of microbial communities. Computational prediction has recently become a widely used approach to investigate microbial interactions. We provide a thorough review of emerging computational methods organized by the type of data they employ. We highlight three major challenges in inferring interactions using metagenomic survey data and discuss the underlying assumptions and mathematics of interaction inference algorithms. In addition, we review interaction prediction methods relying on metabolic pathways, which are increasingly used to reveal mechanisms of interactions. Furthermore, we also emphasize the importance of mining the scientific literature for microbial interactions - a largely overlooked data source for experimentally validated interactions. PMID:27025964

  17. Computational approaches to analogical reasoning current trends

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  18. Interacting electrons theory and computational approaches

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  19. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    Darae Jeong

    2016-01-01

    We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a European cash-or-nothing call option. To show the effect of the adaptive stepping strategy, we calculate the option price and its Greeks with various tolerances. Several numerical results confirm that the proposed method is fast, accurate, and practical in computing the option price and the Greeks.
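
    For a quick sanity check of this kind of Greeks computation, the closed-form price of a European cash-or-nothing call (paying one unit if the asset finishes above the strike) is exp(-rT) N(d2), so numerically differentiated Greeks can be compared against analytic ones. The parameters below are arbitrary, and this closed-form check is not the paper's nonuniform finite-difference PDE solver.

    ```python
    import numpy as np
    from scipy.stats import norm

    def cash_or_nothing_call(S, K, r, sigma, T, payout=1.0):
        """Black-Scholes price of a European cash-or-nothing call: payout * exp(-rT) * N(d2)."""
        d2 = (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        return payout * np.exp(-r * T) * norm.cdf(d2)

    S, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 0.25     # arbitrary example parameters
    h = 1e-3 * S                                          # bump size for central differences

    price = cash_or_nothing_call(S, K, r, sigma, T)
    delta = (cash_or_nothing_call(S + h, K, r, sigma, T)
             - cash_or_nothing_call(S - h, K, r, sigma, T)) / (2.0 * h)
    gamma = (cash_or_nothing_call(S + h, K, r, sigma, T) - 2.0 * price
             + cash_or_nothing_call(S - h, K, r, sigma, T)) / h**2

    # Analytic delta for comparison: exp(-rT) * n(d2) / (S * sigma * sqrt(T)).
    d2 = (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    delta_exact = np.exp(-r * T) * norm.pdf(d2) / (S * sigma * np.sqrt(T))

    print(f"price = {price:.6f}  delta(FD) = {delta:.6f}  delta(exact) = {delta_exact:.6f}  gamma(FD) = {gamma:.6f}")
    ```

    Near expiry and near the strike these Greeks become nearly singular, which is exactly the regime that motivates the adaptive time-stepping studied in the paper.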

  20. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  1. Sculpting the band gap: a computational approach.

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  2. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  3. Efficient and accurate statistical analog yield optimization and variation-aware circuit sizing based on computational intelligence techniques

    Liu, Bo; Fernández, Francisco V.; Gielen, Georges

    2011-01-01

    In nanometer complementary metal-oxide-semiconductor technologies, worst-case design methods and response-surface-based yield optimization methods face challenges in accuracy. Monte-Carlo (MC) simulation is general and accurate for yield estimation, but its efficiency is not high enough to make MC-based analog yield optimization, which requires many yield estimations, practical. In this paper, techniques inspired by computational intelligence are used to speed up yield optimization without sa...

  4. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    Darae Jeong; Minhyun Yoo; Junseok Kim

    2016-01-01

    We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a Eur...

  5. Fast and accurate earthquake location within complex medium using a hybrid global-local inversion approach

    Chaoying Bai; Rui Zhao; Stewart Greenhalgh

    2009-01-01

    A novel hybrid approach for earthquake location is proposed which uses a combined coarse global search and fine local inversion with a minimum search routine, plus an examination of the root mean squares (RMS) error distribution. The method exploits the advantages of network ray tracing and robust formulation of the Frechet derivatives to simultaneously update all possible initial source parameters around most local minima (including the global minimum) in the solution space, and finally to determine the likely global solution. Several synthetic examples involving a 3-D complex velocity model and a challenging source-receiver layout are used to demonstrate the capability of the newly-developed method. This new global-local hybrid solution technique not only incorporates the significant benefits of our recently published hypocenter determination procedure for multiple earthquake parameters, but also offers the attractive features of global optimal searching in the RMS travel time error distribution. Unlike the traditional global search method, for example the Monte Carlo approach, where millions of tests have to be done to find the final global solution, the new method only conducts a matrix inversion type local search, but does it multiple times simultaneously throughout the model volume to seek a global solution. The search is aided by inspection of the RMS error distribution. Benchmark tests against two popular approaches, the direct grid search method and the oct-tree importance sampling method, indicate that the hybrid global-local inversion yields comparable location accuracy and is not sensitive to modest levels of noise in the data, but more importantly it offers a two-order-of-magnitude speed-up in computational effort. Such an improvement, combined with high accuracy, makes it a promising hypocenter determination scheme for earthquake early warning, tsunami early warning, rapid hazard assessment and emergency response after strong earthquake occurrence.

  6. Computational modeling approaches in gonadotropin signaling.

    Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne

    2016-07-01

    Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991

  7. Accurate computation of wave loads on a bottom fixed circular cylinder

    Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.

    2012-01-01

    to a range of other problems. The central idea is to drive an inner CFD model that resolves the flow around the structure with an outer wave model that is based on potential flow theory. By letting the potential flow solver describe the waves in the outer flow domain and the Navier-Stokes solver describe the flow in the inner domain, a fast and accurate description of wave loads on offshore structures is obtained, even for breaking waves. Engsig-Karup et al. [1] have recently developed a fully nonlinear potential flow solver (OceanWave3D) to represent propagation and development of fully nonlinear...

  8. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
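
    The channelized Hotelling observer statistic itself is a small piece of linear algebra once each image is reduced to channel outputs: with channel matrix T, the observer template is w = S^{-1} (v1_bar - v0_bar), built from the channelized mean difference and covariance S, and the test statistic for an image g is t = w^T T^T g. The sketch below applies this to synthetic images; the channel set and image statistics are placeholders, not a PET/MAP model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_pix, n_chan, n_img = 400, 6, 500

    # Placeholder channels (columns of T); practical CHOs use e.g. Gabor or Laguerre-Gauss channels.
    T = rng.standard_normal((n_pix, n_chan))

    # Synthetic signal-absent / signal-present images with correlated noise.
    signal = np.zeros(n_pix); signal[180:220] = 0.5
    L = np.tril(rng.standard_normal((n_pix, n_pix))) * 0.05 + np.eye(n_pix)
    g0 = L @ rng.standard_normal((n_pix, n_img))                    # class 0: no signal
    g1 = L @ rng.standard_normal((n_pix, n_img)) + signal[:, None]  # class 1: signal present

    v0, v1 = T.T @ g0, T.T @ g1                 # channel outputs, shape (n_chan, n_img)

    dv = v1.mean(axis=1) - v0.mean(axis=1)      # channelized mean signal
    S = 0.5 * (np.cov(v0) + np.cov(v1))         # average channelized covariance
    w = np.linalg.solve(S, dv)                  # CHO template

    t0, t1 = w @ v0, w @ v1                     # observer test statistics for each image
    snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
    print("CHO detectability (SNR):", snr)
    ```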

  9. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    Collier, G.

    1967-01-01

    Computer program, an Iterative Scheme Using a Direct Solution, obtains double precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX equals B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
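
    The idea behind ISUDS, obtaining double-precision accuracy from a single-precision coefficient matrix by iterating on the residual, is classical iterative refinement. A minimal numpy sketch of the scheme (not the original subroutine, which would factor the matrix once and reuse the factors for every correction) looks like this:

    ```python
    import numpy as np

    def iterative_refinement(A64, b64, n_iter=5):
        """Solve A x = b to near double precision using only single-precision solves,
        with residuals accumulated in double precision."""
        A32 = A64.astype(np.float32)
        x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)  # initial solve
        for _ in range(n_iter):
            r = b64 - A64 @ x                                  # residual in double precision
            dx = np.linalg.solve(A32, r.astype(np.float32))    # correction from single-precision system
            x += dx.astype(np.float64)
        return x

    rng = np.random.default_rng(5)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    x_true = rng.standard_normal(n)
    b = A @ x_true

    x = iterative_refinement(A, b)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```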

  10. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    Petkova, Margarita; Springel, Volker

    2011-08-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretization of the radiative transfer equation in Boltzmann form with adjustable angular resolution that, in principle, works equally well in the optically-thin and optically-thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid angle, tessellating the unit sphere. Each of these 'cone fields' is transported independently, with constant intensity as a function of the direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger time-steps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally cope with an arbitrary number of sources, as well as with scattering. The method casts sharp shadows, subject to the limitations induced by the adopted angular resolution. If the number of point sources is small and scattering is unimportant, our implementation can alternatively treat each source exactly in angular space, producing shadows whose sharpness is only limited by the grid resolution. A third hybrid alternative is to treat only a small number of the locally most luminous point sources explicitly, with the rest of the radiation intensity followed in a radiative diffusion approximation. We have implemented the method in the moving-mesh code AREPO, where it is coupled to the hydrodynamics in an operator-splitting approach that subcycles the radiative transfer alternately with the hydrodynamical evolution steps. We also discuss our treatment of basic photon sink processes relevant to cosmological reionization, with a chemical network that can

  11. Necessary conditions for accurate computations of three-body partial decay widths

    Garrido, E; Fedorov, D V

    2008-01-01

    The partial width for decay of a resonance into three fragments is largely determined at distances where the energy is smaller than the effective potential producing the corresponding wave function. At short distances the many-body properties are accounted for by preformation or spectroscopic factors. We use the adiabatic expansion method combined with the WKB approximation to obtain the indispensable cluster model wave functions at intermediate and larger distances. We test the concept by deriving conditions for the minimal basis expressed in terms of partial waves and radial nodes. We compare results for different effective interactions and methods. Agreement is found with experimental values for a sufficiently large basis. We illustrate the ideas with realistic examples from $\alpha$-emission of $^{12}$C and two-proton emission of $^{17}$Ne. Basis requirements for accurate momentum distributions are briefly discussed.

  12. Accurate computation of surface stresses and forces with immersed boundary methods

    Goza, Andres; Morley, Benjamin; Colonius, Tim

    2016-01-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is a...

  13. Accurate and efficient computation of the Green's tensor for stratified media

    Gay-Balmaz, P.; Martin, O. J. F.; Paulus, M.

    2000-01-01

    We present a technique for the computation of the Green's tensor in three-dimensional stratified media composed of an arbitrary number of layers with different permittivities and permeabilities (including metals with a complex permittivity). The practical implementation of this technique is discussed in detail. In particular, we show how to efficiently handle the singularities occurring in Sommerfeld integrals, by deforming the integration path in the complex plane. Examples assess the accura...

  14. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  15. A Review of Computer Vision based Algorithms for accurate and efficient Object Detection

    Aneissha Chebolu

    2013-05-01

    In this paper two vision-based algorithms are adopted to locate and identify the objects and obstacles from the environment. In recent days, robot vision and navigation are emerging as essential services, especially in hazardous environments. In this work, two vision-based techniques, color-based thresholding and template matching – both correlation-based similarity measure and FFT (Fast Fourier Transform) based – have been adopted and used for object identification and classification using the image captured by a CCD camera attached to the robotic arm. Then, the robotic arm manipulator is integrated with the computer for further manipulation of the objects based on the application.
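
    The FFT-based template matching mentioned above reduces to computing a cross-correlation between the camera image and the object template via the convolution theorem and taking the location of the maximum. The numpy/scipy sketch below does this for a synthetic scene; it is a generic correlation matcher, not the specific pipeline or robot-arm integration described in the paper.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(6)

    # Synthetic scene: a noisy background with one bright square "object" at a known spot.
    scene = 0.1 * rng.standard_normal((240, 320))
    template = np.zeros((21, 21))
    template[5:16, 5:16] = 1.0                       # the object shape we are looking for
    top, left = 150, 200
    scene[top:top + 21, left:left + 21] += template

    # Cross-correlation via FFT: correlate(image, template) = convolve(image, flipped template).
    zero_mean = template - template.mean()
    corr = fftconvolve(scene, zero_mean[::-1, ::-1], mode="same")

    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print("correlation peak (row, col):", peak)       # should land at the object's center
    print("expected center:            ", (top + 10, left + 10))
    ```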

  16. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS which generate associated currents when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles that are measured by the high quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO are presented. The full measured rotation profiles are included in the simulation. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored and simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  17. Music Genre Classification Systems - A Computational Approach

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  18. A coupled BEM/scaled boundary FEM formulation for accurate computations in linear elastic fracture mechanics.

    Bird, G. E.; Trevelyan, J; Augarde, C.E.

    2010-01-01

    Issues relating to the practical implementation of the coupled boundary element–scaled boundary finite element method are addressed in this paper. A detailed approach highlights fully the process of applying boundary conditions, including the treatment of examples in which the assumptions made in previous work are no longer valid. Verification of the method is undertaken by means of estimating stress intensity factors and comparing them against analytical solutions. The coupled algorithm show...

  19. A computational approach to negative priming

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). The dependence on subtle parameter changes (such as response-stimulus interval) usually varies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  20. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck resulting in upward flow, which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  1. A fast and accurate method to compute the mass return from multiple stellar populations

    Calura, F; Nipoti, C

    2013-01-01

    The mass returned to the ambient medium by aging stellar populations over cosmological times sums up to a significant fraction (20% - 30% or more) of their initial mass. This continuous mass injection plays a fundamental role in phenomena such as galaxy formation and evolution, fueling of supermassive black holes in galaxies and the consequent (negative and positive) feedback phenomena, and the origin of multiple stellar populations in globular clusters. In numerical simulations the calculation of the mass return can be time consuming, since it requires at each time step the evaluation of a convolution integral over the whole star formation history, so the computational time increases quadratically with the number of time-steps. The situation can be especially critical in hydrodynamical simulations, where different grid points are characterized by different star formation histories, and the gas cooling and heating times are shorter by orders of magnitude than the characteristic stellar lifetimes. In this pape...
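
    The cost the authors describe comes from re-evaluating, at every time step, a convolution of the full star formation history with a single-population mass-return law. A naive O(N^2) sketch of that evaluation (the exponential return-fraction law below is only a placeholder for the fitting formulae actually used in such calculations):

        import numpy as np

        def return_fraction(age_gyr):
            """Placeholder cumulative fraction of a population's initial mass returned by age t.
            (The actual law depends on the IMF and stellar lifetimes; this is only illustrative.)"""
            return 0.3 * (1.0 - np.exp(-age_gyr / 1.0))

        def mass_return_rate(times_gyr, sfr):
            """Naive O(N^2) evaluation: at each time step, convolve the whole star formation
            history with the time derivative of the return fraction."""
            dt = times_gyr[1] - times_gyr[0]
            rates = np.zeros_like(sfr)
            for i, t in enumerate(times_gyr):
                ages = t - times_gyr[: i + 1]
                drdt = 0.3 * np.exp(-ages / 1.0) / 1.0   # dR/dt at the age of each earlier population
                rates[i] = np.sum(sfr[: i + 1] * drdt) * dt
            return rates

        times = np.linspace(0.0, 13.0, 1301)             # Gyr
        sfr = np.exp(-times / 3.0)                       # arbitrary declining SFH, Msun/Gyr
        print(mass_return_rate(times, sfr)[-1])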

  2. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  3. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  4. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  5. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  6. SPEECH RECOGNITION - A COMPUTER MEDIATED APPROACH

    Kaliyaperumal Karthikeyan

    2012-01-01

    The computer revolution is now well advanced, but although computers are beginning to appear in many forms of work people do, their domain is still comparatively small because of the specialized training needed to use them and the lack of intelligence in computer systems. In the history of computer science five generations have passed by, each adding a new innovative technology that brought computers nearer and nearer to the people. Now it is the sixth generation, whos...

  7. A Desktop Grid Computing Approach for Scientific Computing and Visualization

    Constantinescu-Fuløp, Zoran

    2008-01-01

    Scientific Computing is the collection of tools, techniques, and theories required to solve on a computer, mathematical models of problems from science and engineering, and its main goal is to gain insight in such problems. Generally, it is difficult to understand or communicate information from complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments etc.). Therefore, support of Scientific Vi...

  8. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. Computational approaches to predict bacteriophage-host relationships.

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications. PMID:26657537
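
    Among the compositional signals reviewed, a common choice is oligonucleotide (e.g. tetranucleotide) frequency similarity between a phage and candidate host genomes. A toy sketch of that signal (cosine similarity on short synthetic sequences; not the authors' pipeline):

        from itertools import product
        import numpy as np

        def kmer_profile(seq, k=4):
            """Normalised k-mer frequency vector of a DNA sequence (fixed k-mer ordering)."""
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            counts = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                j = index.get(seq[i : i + k])
                if j is not None:                 # skip k-mers containing ambiguous bases
                    counts[j] += 1
            return counts / max(counts.sum(), 1)

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        # Toy genomes: rank candidate hosts by compositional similarity to the phage.
        phage = "ATGCGTACGTTAGC" * 200
        hosts = {"hostA": "ATGCGTACGTTAGC" * 300, "hostB": "GGGCCCGGGCCCAA" * 300}
        scores = {name: cosine(kmer_profile(phage), kmer_profile(g)) for name, g in hosts.items()}
        print(max(scores, key=scores.get), scores)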

  10. Trajectory Evaluation of Rotor-Flying Robots Using Accurate Inverse Computation Based on Algorithm Differentiation

    Yuqing He

    2014-01-01

    Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of its model and significant uncertainties regarding many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible. It is necessary to conduct an analysis of the flight maneuvering ability of an RFR prior to test flight. Our aim in this paper is to use a numerical method called algorithm differentiation (AD) to solve this problem. The basic idea is to compute the internal state (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.
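
    Algorithm differentiation propagates exact derivatives through the model equations instead of approximating them by finite differences. The bare-bones forward-mode (dual-number) sketch below, unrelated to the specific RFR model, shows how the time derivative of a prescribed output trajectory can be obtained this way:

        import math

        class Dual:
            """Forward-mode AD value: carries f and df/dx through ordinary arithmetic."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
            __rmul__ = __mul__
            def sin(self):
                return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

        def d_dt(f, t):
            """Exact derivative of f at t, to machine precision, without finite differences."""
            return f(Dual(t, 1.0)).dot

        # Example: a prescribed output trajectory y(t) = 2 t sin(t); its time derivative is the
        # kind of quantity needed when mapping desired outputs to states/inputs of a flat system.
        y = lambda t: (2 * t) * t.sin() if isinstance(t, Dual) else 2 * t * math.sin(t)
        print(d_dt(y, 1.5), 2 * math.sin(1.5) + 2 * 1.5 * math.cos(1.5))   # the two should agree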

  11. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(ii)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understand the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609

  12. Accurate micro-computed tomography imaging of pore spaces in collagen-based scaffold.

    Zidek, Jan; Vojtova, Lucy; Abdel-Mohsen, A M; Chmelik, Jiri; Zikmund, Tomas; Brtnikova, Jana; Jakubicek, Roman; Zubal, Lukas; Jan, Jiri; Kaiser, Jozef

    2016-06-01

    In this work we have used X-ray micro-computed tomography (μCT) as a method to observe the morphology of 3D porous pure collagen and collagen-composite scaffolds useful in tissue engineering. Two aspects of visualizations were taken into consideration: improvement of the scan and investigation of its sensitivity to the scan parameters. Due to the low material density some parts of collagen scaffolds are invisible in a μCT scan. Therefore, here we present different contrast agents, which increase the contrast of the scanned biopolymeric sample for μCT visualization. The increase of contrast of collagenous scaffolds was performed with ceramic hydroxyapatite microparticles (HAp), silver ions (Ag(+)) and silver nanoparticles (Ag-NPs). Since a relatively small change in imaging parameters (e.g. in 3D volume rendering, threshold value and μCT acquisition conditions) leads to a completely different visualized pattern, we have optimized these parameters to obtain the most realistic picture for visual and qualitative evaluation of the biopolymeric scaffold. Moreover, scaffold images were stereoscopically visualized in order to better see the 3D biopolymer composite scaffold morphology. However, the optimized visualization has some discontinuities in zoomed view, which can be problematic for further analysis of interconnected pores by commonly used numerical methods. Therefore, we applied the locally adaptive method to solve discontinuities issue. The combination of contrast agent and imaging techniques presented in this paper help us to better understand the structure and morphology of the biopolymeric scaffold that is crucial in the design of new biomaterials useful in tissue engineering. PMID:27153826

  13. Computability and Analysis, a Historical Approach

    Brattka, Vasco

    2016-01-01

    The history of computability theory and the history of analysis are surprisingly intertwined since the beginning of the twentieth century. For one, Émile Borel discussed his ideas on computable real number functions in his introduction to measure theory. On the other hand, Alan Turing had computable real numbers in mind when he introduced his now famous machine model. Here we want to focus on a particular aspect of computability and analysis, namely on computability properties of theorem...

  14. How accurate is image-free computer navigation for hip resurfacing arthroplasty? An anatomical investigation

    The existing studies concerning image-free navigated implantation of hip resurfacing arthroplasty are based on analysis of the accuracy of conventional biplane radiography. Studies have shown that these measurements in biplane radiography are imprecise and that precision is improved by use of three-dimensional (3D) computer tomography (CT) scans. To date, the accuracy of image-free navigation devices for hip resurfacing has not been investigated using CT scans, and anteversion accuracy has not been assessed at all. Furthermore, no study has tested the reliability of the navigation software concerning the automatically calculated implant position. The purpose of our study was to analyze the accuracy of varus-valgus and anteversion using an image-free hip resurfacing navigation device. The reliability of the software-calculated implant position was also determined. A total of 32 femoral hip resurfacing components were implanted on embalmed human femurs using an image-free navigation device. In all, 16 prostheses were implanted with the proposed position generated by the navigation software; the 16 prostheses were inserted in an optimized valgus position. A 3D CT scan was undertaken before and after operation. The difference between the measured and planned varus-valgus angle averaged 1 deg (mean±standard deviation (SD): group I, 1 deg±2 deg; group II, 1 deg±1 deg). The mean±SD difference between femoral neck anteversion and anteversion of the implant was 4 deg (group I, 4 deg±4 deg; group II, 4 deg±3 deg). The software-calculated implant position differed 7 deg±8 deg from the measured neck-shaft angle. These measured accuracies did not differ significantly between the two groups. Our study proved the high accuracy of the navigation device concerning the most important biomechanical factor: the varus-valgus angle. The software calculation of the proposed implant position has been shown to be inaccurate and needs improvement. Hence, manual adjustment of the

  15. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged Van der Waals interaction in classical molecular dynamics simulations, it has been a long-lasting issue to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we briefly review the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems which are often associated with the boundary conditions of electrostatics. This brief survey is summarized with a short perspective on future trends along the method developments and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304) and the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
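
    Of the three families surveyed, truncation-type methods are the simplest: pair interactions beyond a cutoff are discarded, often with a shift or switch to soften the resulting boundary artefacts. A minimal cut-and-shift sketch under the minimum-image convention (illustrative only, not any particular package's scheme):

        import numpy as np

        def coulomb_cut_shift(pos, q, box, r_cut):
            """Coulomb energy (Gaussian units) with the minimum-image convention and a shifted
            potential V(r) = q_i q_j (1/r - 1/r_cut) that vanishes at the cutoff."""
            n = len(q)
            energy = 0.0
            for i in range(n - 1):
                d = pos[i + 1:] - pos[i]
                d -= box * np.round(d / box)            # minimum-image convention
                r = np.linalg.norm(d, axis=1)
                mask = r < r_cut
                energy += np.sum(q[i] * q[i + 1:][mask] * (1.0 / r[mask] - 1.0 / r_cut))
            return energy

        rng = np.random.default_rng(2)
        box = np.array([20.0, 20.0, 20.0])
        pos = rng.random((100, 3)) * box
        q = rng.choice([-1.0, 1.0], size=100)
        print(coulomb_cut_shift(pos, q, box, r_cut=9.0))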

  16. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Ahnen, Sandra; Hehn, Anna-Sophia [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Vogiatzis, Konstantinos D. [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany); Trachsel, Maria A.; Leutwyler, Samuel [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Klopper, Wim, E-mail: klopper@kit.edu [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany)

    2014-09-30

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.

  17. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation

  18. An accurate scheme to solve cluster dynamics equations using a Fokker-Planck approach

    Jourdan, Thomas; Legoll, Frédéric; Monasse, Laurent

    2016-01-01

    We present a numerical method to accurately simulate particle size distributions within the formalism of rate equation cluster dynamics. This method is based on a discretization of the associated Fokker-Planck equation. We show that particular care has to be taken to discretize the advection part of the Fokker-Planck equation, in order to avoid distortions of the distribution due to numerical diffusion. For this purpose we use the Kurganov-Noelle-Petrova scheme coupled with the monotonicity-preserving reconstruction MP5, which leads to very accurate results. The interest of the method is highlighted on the case of loop coarsening in aluminum. We show that the choice of the models to describe the energetics of loops does not significantly change the normalized loop distribution, while the choice of the models for the absorption coefficients seems to have a significant impact on it.
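
    The numerical-diffusion problem that motivates the higher-order KNP/MP5 discretization is easy to reproduce: a first-order upwind treatment of the advection term visibly smears a sharp distribution. A toy one-dimensional illustration (not the cluster-dynamics equations themselves):

        import numpy as np

        def advect_upwind(u0, c, dx, dt, steps):
            """First-order upwind advection of u at constant positive speed c.
            The scheme is stable for c*dt/dx <= 1 but adds strong numerical diffusion."""
            u = u0.copy()
            nu = c * dt / dx                      # Courant number
            for _ in range(steps):
                u[1:] = u[1:] - nu * (u[1:] - u[:-1])
                u[0] = 0.0                        # inflow boundary
            return u

        x = np.linspace(0.0, 1.0, 401)
        u0 = np.where((x > 0.1) & (x < 0.15), 1.0, 0.0)    # sharp initial "distribution"
        u = advect_upwind(u0, c=1.0, dx=x[1] - x[0], dt=0.001, steps=500)
        # The exact solution is the same square pulse shifted by 0.5; the upwind result is
        # visibly smeared, which is precisely the distortion the paper's scheme avoids.
        print(u.max())          # noticeably below 1.0 because of numerical diffusion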

  19. An accurate and efficient experimental approach for characterization of the complex oral microbiota

    Zheng, Wei; Tsompana, Maria; Ruscitto, Angela; Sharma, Ashu; Genco, Robert; Sun, Yijun; Buck, Michael J.

    2015-01-01

    Background Currently, taxonomic interrogation of microbiota is based on amplification of 16S rRNA gene sequences in clinical and scientific settings. Accurate evaluation of the microbiota depends heavily on the primers used, and genus/species resolution bias can arise with amplification of non-representative genomic regions. The latest Illumina MiSeq sequencing chemistry has extended the read length to 300 bp, enabling deep profiling of large number of samples in a single paired-end reaction ...

  20. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    Hrubý Jan

    2012-04-01

    Full Text Available Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem both concerning the physical concepts and the required computational power. Available accurate formulations of steam properties IAPWS-95 and IAPWS-IF97 require much computation time. For this reason, the modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
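
    The central idea, evaluating properties from a precomputed piecewise representation in (density, internal energy) without iteration or variable transformation, can be illustrated with a plain bilinear lookup table. The sketch below ignores the thermodynamic-consistency constraints the paper enforces, and the gamma-law filler merely stands in for the offline IAPWS-95 evaluations that would populate the table:

        import numpy as np

        # Offline: tabulate pressure on a (density, internal energy) grid. In practice each
        # node would be filled by an accurate IAPWS-95 evaluation; a gamma-law gas is used
        # here purely as a placeholder so the example is self-contained.
        rho = np.linspace(0.1, 100.0, 200)          # kg/m^3
        u = np.linspace(2.0e6, 3.5e6, 150)          # J/kg
        R, U = np.meshgrid(rho, u, indexing="ij")
        p_table = (1.33 - 1.0) * R * U              # placeholder p(rho, u)

        def pressure(rho_q, u_q):
            """Bilinear interpolation in the (rho, u) table: no iteration, no variable change."""
            i = np.clip(np.searchsorted(rho, rho_q) - 1, 0, len(rho) - 2)
            j = np.clip(np.searchsorted(u, u_q) - 1, 0, len(u) - 2)
            tr = (rho_q - rho[i]) / (rho[i + 1] - rho[i])
            tu = (u_q - u[j]) / (u[j + 1] - u[j])
            return ((1 - tr) * (1 - tu) * p_table[i, j]     + tr * (1 - tu) * p_table[i + 1, j]
                  + (1 - tr) * tu       * p_table[i, j + 1] + tr * tu       * p_table[i + 1, j + 1])

        print(pressure(42.0, 2.6e6))                # fast lookup usable inside a CFD flux loop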

  1. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  2. Human Computer Interaction: An intellectual approach

    Mr. Kuntal Saroha; Sheela Sharma; Gurpreet Bhatia

    2011-01-01

    This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. ...

  3. Inferring haplotypes at the NAT2 locus: the computational approach

    Sabbagh Audrey

    2005-06-01

    Full Text Available Abstract Background Numerous studies have attempted to relate genetic polymorphisms within the N-acetyltransferase 2 gene (NAT2) to interindividual differences in response to drugs or in disease susceptibility. However, genotyping of individual single-nucleotide polymorphisms (SNPs) alone may not always provide enough information to reach these goals. It is important to link SNPs in terms of haplotypes, which carry more information about the genotype-phenotype relationship. Special analytical techniques have been designed to unequivocally determine the allocation of mutations to either DNA strand. However, molecular haplotyping methods are labour-intensive and expensive and do not appear to be good candidates for routine clinical applications. A cheap and relatively straightforward alternative is the use of computational algorithms. The objective of this study was to assess the performance of the computational approach in NAT2 haplotype reconstruction from phase-unknown genotype data, for population samples of various ethnic origin. Results We empirically evaluated the effectiveness of four haplotyping algorithms in predicting haplotype phases at NAT2, by comparing the results with those directly obtained through molecular haplotyping. All computational methods provided remarkably accurate and reliable estimates for NAT2 haplotype frequencies and individual haplotype phases. The Bayesian algorithm implemented in the PHASE program performed the best. Conclusion This investigation provides a solid basis for the confident and rational use of computational methods which appear to be a good alternative to infer haplotype phases in the particular case of the NAT2 gene, where there is near complete linkage disequilibrium between polymorphic markers.
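
    The computational alternative discussed here can be illustrated with the expectation-maximization idea behind algorithms of the Excoffier-Slatkin type (PHASE itself uses a Bayesian method instead). The sketch below phases toy 3-SNP genotypes; real NAT2 data would involve more SNPs and larger samples:

        from itertools import product
        from collections import defaultdict

        def compatible_pairs(genotype):
            """All unordered haplotype pairs consistent with an unphased genotype.
            Genotype coding per SNP: 0/2 = homozygous (ref/alt), 1 = heterozygous."""
            het = [i for i, g in enumerate(genotype) if g == 1]
            base = [0 if g == 0 else 1 for g in genotype]   # het sites are overwritten below
            pairs = set()
            for bits in product([0, 1], repeat=len(het)):
                h1, h2 = base[:], base[:]
                for site, b in zip(het, bits):
                    h1[site], h2[site] = b, 1 - b
                pairs.add(tuple(sorted([tuple(h1), tuple(h2)])))
            return list(pairs)

        def em_haplotype_frequencies(genotypes, n_iter=100):
            """Excoffier-Slatkin-style EM: alternate expected pair counts and frequency updates."""
            freqs = defaultdict(float)
            for g in genotypes:                              # rough uniform start over observed haplotypes
                for h1, h2 in compatible_pairs(g):
                    freqs[h1] += 1.0
                    freqs[h2] += 1.0
            total = sum(freqs.values())
            freqs = {h: c / total for h, c in freqs.items()}
            for _ in range(n_iter):
                counts = defaultdict(float)
                for g in genotypes:
                    pairs = compatible_pairs(g)
                    weights = [(2 - (h1 == h2)) * freqs.get(h1, 0) * freqs.get(h2, 0) for h1, h2 in pairs]
                    z = sum(weights) or 1.0
                    for (h1, h2), w in zip(pairs, weights):
                        counts[h1] += w / z
                        counts[h2] += w / z
                total = sum(counts.values())
                freqs = {h: c / total for h, c in counts.items()}
            return freqs

        # Toy 3-SNP genotypes (0 = hom ref, 1 = het, 2 = hom alt).
        genos = [(1, 1, 0), (2, 2, 0), (0, 0, 1), (1, 1, 1), (2, 2, 1)]
        for hap, f in sorted(em_haplotype_frequencies(genos).items(), key=lambda kv: -kv[1]):
            print(hap, round(f, 3))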

  4. Physics and computer science: quantum computation and other approaches

    Salvador E. Venegas-Andraca

    2011-01-01

    This is a position paper written as an introduction to the special volume on quantum algorithms I edited for the journal Mathematical Structures in Computer Science (Volume 20 - Special Issue 06 (Quantum Algorithms), 2010).

  5. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  6. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Jaco Prinsloo

    2016-06-01

    Full Text Available Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  7. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  8. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with some molecules before use. However, the information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike the traditional physical methods, this method, which is based on the catalytic property of AuNPs, may give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been demonstrated by adopting three independent molecules to functionalize AuNPs, including both covalent and non-covalent functionalization. Some interesting results are thereby obtained, and some are revealed for the first time. The method may also be further developed as a useful tool for the characterization of a solid surface

  9. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  10. Cluster Computing: A Mobile Code Approach

    Patel, R. B.; Manpreet Singh

    2006-01-01

    Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster Computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we have presented design and implementation of a cluster based framework using mobile code. The cluster implementation involves the designing of a server named MCLUSTER which manages the configuring, resetting of clust...

  11. What is Computation: An Epistemic Approach

    Wiedermann, Jiří; van Leeuwen, J.

    Berlin: Springer, 2015 - (Italiano, G.; Margaria-Steffen, T.; Pokorný, J.; Quisquater, J.; Wattenhofer, R.), s. 1-13. (Lecture Notes in Computer Science. 8939). ISBN 978-3-662-46077-1. ISSN 0302-9743. [Sofsem 2015. International Conference on Current Trends in Theory and Practice of Computer Science /41./. Pec pod Sněžkou (CZ), 24.01.2015-29.01.2015] R&D Projects: GA ČR GAP202/10/1333 Institutional support: RVO:67985807 Keywords : computation * knowledge generation * information technology Subject RIV: IN - Informatics, Computer Science

  12. Accurate Waveforms for Non-spinning Binary Black Holes using the Effective-one-body Approach

    Buonanno, Alessandra; Pan, Yi; Baker, John G.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.

    2007-01-01

    Using numerical relativity as guidance and the natural flexibility of the effective-one-body (EOB) model, we extend the latter so that it can successfully match the numerical relativity waveforms of non-spinning binary black holes during the last stages of inspiral, merger and ringdown. Here, by successfully, we mean with phase differences black-hole masses. The final black-hole mass and spin predicted by the numerical simulations are used to determine the ringdown frequency and decay time of three quasi-normal-mode damped sinusoids that are attached to the EOB inspiral-(plunge) waveform at the light-ring. The accurate EOB waveforms may be employed for coherent searches of gravitational waves emitted by non-spinning coalescing binary black holes with ground-based laser-interferometer detectors.

  13. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during radiotherapy course. This study is to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360 acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.

  14. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    Hu, Chih-Chung [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Huang, Wen-Tao [Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Tsai, Chiao-Ling; Chao, Hsiao-Ling; Huang, Guo-Ming; Wang, Chun-Wei [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Wu, Jian-Kuen [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Wu, Chien-Jang [National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Cheng, Jason Chia-Hsien [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Clinical Medicine; National Taiwan Univ. Taipei (China). Graduate Inst. of Biomedical Electronics and Bioinformatics

    2011-10-15

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during radiotherapy course. This study is to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360 acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.

  15. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in the studies looking into relationships between genotype and phenotype. This need is due to the advent of high throughput methods for analysis of genomes. Nowadays, any genetic experiment involves data on thousands and dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, or the ruler) are hardly effective on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants a much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  16. Computer science approach to quantum control

    Janzing, Dominik

    2006-01-01

    This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. It shows that measuring certain types of quantum observables is such a complex task that every instrument that is able to perform it would necessarily be an extremely powerful computer.

  17. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    Zhibin Miao

    2015-12-01

    Full Text Available With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common method is to measure vehicle stability parameters by fusing data from GPS and INS sensors. Although a Kalman filter requires prior knowledge of the model parameters, it is commonly used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are carried out to verify the advantages of this approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
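
    As a rough illustration of GPS/INS fusion with a Kalman filter (a single-stage, one-dimensional toy, not the paper's two-stage filter or four-wheel model), the sketch below lets INS acceleration drive the prediction step while noisy GPS positions provide the correction; all noise variances are assumed values:

        import numpy as np

        def fuse_gps_ins(gps_pos, ins_acc, dt, r_gps=4.0, q_acc=0.25):
            """1-D Kalman filter: INS acceleration drives the prediction, GPS position corrects it.
            State x = [position, velocity]; r_gps and q_acc are assumed noise variances."""
            F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
            B = np.array([0.5 * dt**2, dt])                  # control (acceleration) input
            H = np.array([[1.0, 0.0]])                       # GPS observes position only
            Q = q_acc * np.outer(B, B)                       # process noise from accel noise
            x = np.zeros(2)
            P = np.eye(2) * 10.0
            estimates = []
            for z, a in zip(gps_pos, ins_acc):
                x = F @ x + B * a                            # predict
                P = F @ P @ F.T + Q
                y = z - H @ x                                # innovation
                S = H @ P @ H.T + r_gps
                K = P @ H.T / S                              # Kalman gain (2x1)
                x = x + (K * y).ravel()
                P = (np.eye(2) - K @ H) @ P
                estimates.append(x.copy())
            return np.array(estimates)

        # Synthetic check: constant acceleration, noisy GPS.
        rng = np.random.default_rng(3)
        dt, n = 0.1, 300
        t = np.arange(n) * dt
        true_pos = 0.5 * 0.8 * t**2
        gps = true_pos + rng.normal(scale=2.0, size=n)
        ins = 0.8 + rng.normal(scale=0.5, size=n)
        est = fuse_gps_ins(gps, ins, dt)
        print(float(np.abs(est[:, 0] - true_pos).mean()))    # typically well below the raw GPS noise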

  18. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) has a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)), of which the heterovalent state is the active form, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, hydrogen bond donors to enable fixation of the substrate and release of the product are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent measurements, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  19. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, namely asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ~20 compared with that of the global PES. In the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant parts of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, those from previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is obtained.

  20. DISTRIBUTED COMPUTING APPROACHES FOR SCALABILITY AND HIGH PERFORMANCE

    MANJULA K A

    2010-06-01

    Full Text Available Distributed computing is a science which solves a large problem by giving small parts of the problem to many computers to solve and then combining the solutions for the parts into a solution for the problem. This distributed computing framework suits projects that have an insatiable appetite for computing power; two such popular projects are SETI@Home and Folding@Home. Different architectures and approaches for distributed computing are being proposed as part of work progressing around the world. One way of distributing both data and computing power, known as grid computing, taps the Internet to put petabyte processing on every researcher's desktop. Grid technology is finding its way out of the academic incubator and entering commercial environments. Cloud computing, which is a variant of grid computing, has emerged as a potentially competing approach for architecting large distributed systems. Clouds can be viewed as a logical and next higher-level abstraction from grids.

  1. Assessing creativity in computer music ensembles: a computational approach

    Comajuncosas, Josep M.

    2016-01-01

    Over the last decade Laptop Orchestras and Mobile Ensembles have proliferated. As a result, a large body of research has arisen on infrastructure, evaluation, design principles and compositional methodologies for Computer Music Ensembles (CME). However, the challenges and opportunities that CMEs provide for creativity in musical performance have received little attention and remain poorly understood. Therefore, one of the most common issues CMEs have to deal with is the lack of ...

  2. Human Computer Interaction: An intellectual approach

    Kuntal Saroha

    2011-08-01

    Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  3. Uncertainty in biology: a computational modeling approach

    2015-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building...

  4. NETWORK SECURITY: AN APPROACH TOWARDS SECURE COMPUTING

    Rahul Pareek

    2011-01-01

    The security of computer networks plays a strategic role in modern computer systems. In order to enforce high protection levels against malicious attacks, a number of software tools have been developed. Intrusion detection systems have recently become a heated research topic due to their capability of detecting and preventing attacks from malicious network users. A pattern-matching IDS for network security is proposed in this paper. Many network security applications...

  5. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    Abolfazli, Saeid; Sanaei, Zohreh; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, name...

  6. Computational approaches to enhance mass spectrometry-based proteomics

    Neuhauser, Nadin

    2014-01-01

    In this thesis I present three computational approaches that improve the analysis of mass spectrometry-based proteomics data. The novel search engine Andromeda allows efficient identification of peptides and proteins. The implementation of a rule-based expert system provides access to more detailed information contained in the mass spectra. Furthermore, I adapted our computational proteomics pipeline to high-performance computers.

  7. Computational dynamics for robotics systems using a non-strict computational approach

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  8. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  9. Machine learning and synthetic aperture refocusing approach for more accurate masking of fish bodies in 3D PIV data

    Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    3D particle image velocimetry (PIV) is becoming a popular technique for studying biological flows. PIV images that contain fish or other animals around which flow is being studied need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish, along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of a fish in a volume are determined, it can be masked out.

  10. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which initially screens rapidly for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that, for the overall performance of the multi-stage process, the method used for confirmation is most important, and that a t-test performs best when differences are moderate to large. Moreover, we find that when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. PMID:27262992
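
    As a schematic illustration of such a tiered workflow (the screening cut point, replicate counts, simulated signals and significance level below are assumed for illustration and are not the manuscript's values), a screened-positive sample could be confirmed with a one-sided t-test on drug-competed replicates:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def confirm_by_ttest(signal_reps, competed_reps, alpha=0.01):
          """Confirm a screened-positive sample if competition significantly lowers the signal."""
          t, p = stats.ttest_ind(signal_reps, competed_reps, alternative='greater')
          return p < alpha

      screen_cut = 1.2   # assumed screening cut point (signal-to-background ratio)
      sample = {'screen': 1.5,
                'signal_reps': rng.normal(1.5, 0.05, 6),      # simulated uncompeted replicates
                'competed_reps': rng.normal(1.1, 0.05, 6)}    # simulated drug-competed replicates

      if sample['screen'] > screen_cut:
          confirmed = confirm_by_ttest(sample['signal_reps'], sample['competed_reps'])
          print('confirmed positive' if confirmed else 'screen positive, not confirmed')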

  11. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    Petkova, Margarita

    2010-01-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretisation of the radiative transfer equation in Boltzmann form with adjustable angular resolution that in principle works equally well in the optically thin and optically thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid-angle, tessellating the unit sphere. Each of these "cone-fields" is transported independently, with constant intensity as a function of direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger timesteps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally...

  12. Human brain mapping: Experimental and computational approaches

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high-performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  13. Uncertainty in biology a computational modeling approach

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book aims to address four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  14. Heterogeneous Computing in Economics: A Simplified Approach

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a...

  15. Heterogeneous Computing in Economics: A Simplified Approach

    Dziubinski, Matt P.; Grassi, Stefano

    2012-01-01

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a speed gain together with a simplified programming style that naturally enables parallelization.

  16. Cluster Computing: A Mobile Code Approach

    R. B. Patel

    2006-01-01

    Full Text Available Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we present the design and implementation of a cluster-based framework using mobile code. The cluster implementation involves the design of a server named MCLUSTER which manages the configuring and resetting of the cluster. It allows a user to provide the necessary information regarding the application to be executed via a graphical user interface (GUI). The framework handles the generation of the application mobile code and its distribution to appropriate client nodes, efficient handling of the results generated and communicated by a number of client nodes, and recording of the execution time of the application. The client node receives and executes the mobile code that defines the distributed job submitted by the MCLUSTER server and replies with the results. We have also analyzed the performance of the developed system, emphasizing the tradeoff between communication and computation overhead.

  17. LOCUSTRA: accurate prediction of local protein structure using a two-layer support vector machine approach.

    Zimmermann, Olav; Hansmann, Ulrich H E

    2008-09-01

    Constraint generation for 3D structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVM). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16=61.0% for the 16 classes of the structural alphabet and Q3=79.2% for a simple mapping to the three secondary structure classes helix, sheet, and coil. We achieve a mean phi (psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the dihedral angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837
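
    The two-layer idea can be sketched as follows (a toy illustration using scikit-learn; the random features, labels and window size stand in for the sequence profiles and context actually used by LOCUSTRA):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 20))        # stand-in for per-residue sequence features
      y = rng.integers(0, 16, size=500)     # 16 structural-alphabet classes (toy labels)

      # Layer 1: an SVM produces per-class scores for each residue.
      layer1 = SVC(kernel='rbf', probability=True).fit(X, y)
      scores = layer1.predict_proba(X)      # (n_residues, n_classes) class scores

      # Layer 2: a second SVM refines predictions using the scores of neighbouring residues.
      window = np.hstack([np.roll(scores, s, axis=0) for s in (-1, 0, 1)])
      layer2 = SVC(kernel='rbf').fit(window, y)
      print(layer2.predict(window[:5]))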

  18. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows over a helicopter rotor in forward flight. In order to enforce the periodic rotation and pitching of the rotor and the relative motion between rotor blades, the moving overset grid technique is extended, and a special judgement criterion is introduced near the odd surface of the blade grid during the search for donor cells with the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow information exchange between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields of the unsteady flow computation. Numerical results on a non-variable pitch rotor and a periodically variable pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and reach periodic solutions quickly.
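
    For readers unfamiliar with the reconstruction step, the following is a compact sketch of a fifth-order WENO-Z reconstruction of the left interface state on a uniform one-dimensional scalar stencil, using the standard Jiang-Shu smoothness indicators with a Borges-type tau5 weighting; the epsilon value and the power on the weights are assumed choices, and the scheme in the record above of course operates on the Navier-Stokes fluxes rather than a scalar:

      import numpy as np

      def weno_z_left(u):
          """Fifth-order WENO-Z reconstruction of u at i+1/2 from cells i-2..i+2."""
          um2, um1, u0, up1, up2 = u  # the five-cell stencil

          # Candidate third-order reconstructions
          p0 = (2*um2 - 7*um1 + 11*u0) / 6.0
          p1 = (-um1 + 5*u0 + 2*up1) / 6.0
          p2 = (2*u0 + 5*up1 - up2) / 6.0

          # Jiang-Shu smoothness indicators
          b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
          b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
          b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2

          # WENO-Z weights (epsilon and the power 2 are assumed choices)
          eps, d = 1e-40, np.array([0.1, 0.6, 0.3])
          tau5 = abs(b0 - b2)
          alpha = d * (1.0 + (tau5 / (np.array([b0, b1, b2]) + eps))**2)
          w = alpha / alpha.sum()
          return w @ np.array([p0, p1, p2])

      print(weno_z_left(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))  # near a discontinuity

    Near the discontinuity the weights collapse onto the smooth candidate stencil, which is the low-dissipation, non-oscillatory behaviour the record above relies on.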

  19. Computational approach for a pair of bubble coalescence process

    Nurul Hasan, E-mail: nurul_hasan@petronas.com.my [Department of Chemical Engineering, Universiti Teknologi Petronas, Bandar Seri Iskandar, Perak 31750 (Malaysia); Zalinawati binti Zakaria [Department of Chemical Engineering, Universiti Teknologi Petronas, Bandar Seri Iskandar, Perak 31750 (Malaysia)

    2011-06-15

    The coalescence of bubbles has great value in mineral recovery and the oil industry. In this paper, two co-axial bubbles rising in a cylinder are modelled to study bubble coalescence for four computational experimental test cases. The Reynolds number (Re) is chosen between 8.50 and 10, the Bond number Bo ≈ 4.25-50, and the Morton number M 0.0125-14.7. The viscosity ratio (μr) and density ratio (ρr) of liquid to bubble are kept constant (100 and 850 respectively). It was found that the Bo number has a significant effect on the coalescence process for constant Re, μr and ρr. The bubble-bubble distance over time was validated against published experimental data. The results show that the VOF approach can be used to model these phenomena accurately. The surface tension was changed to alter Bo, and the density of the fluids to alter Re and M, keeping μr and ρr the same. It was found that for lower Bo the bubbles coalesce more slowly and the pocket at the lower part of the leading bubble is less concave (towards the downward direction), which is supported by the experimental data.

  20. A simple, efficient, and high-order accurate curved sliding-mesh interface approach to spectral difference method on coupled rotating and stationary domains

    Zhang, Bin; Liang, Chunlei

    2015-08-01

    This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.

  1. Computational Approach To Understanding Autism Spectrum Disorders

    Włodzisław Duch

    2012-01-01

    Full Text Available Every year the prevalence of Autism Spectrum Disorders (ASD) is rising. Is there a unifying mechanism of various ASD cases at the genetic, molecular, cellular or systems level? The hypothesis advanced in this paper is focused on neural dysfunctions that lead to problems with attention in autistic people. Simulations of attractor neural networks performing cognitive functions help to assess long-term system neurodynamics. The Fuzzy Symbolic Dynamics (FSD) technique is used for the visualization of attractors in the semantic layer of the neural model of reading. Large-scale simulations of brain structures characterized by a high order of complexity require enormous computational power, especially if biologically motivated neuron models are used to investigate the influence of cellular structure dysfunctions on the network dynamics. Such simulations have to be implemented on computer clusters in grid-based architectures.

  2. Music Genre Classification Systems - A Computational Approach

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  3. A fourth-order accurate curvature computation in a level set framework for two-phase flows subjected to surface tension forces

    Coquerelle, Mathieu; Glockner, Stéphane

    2016-01-01

    We propose an accurate and robust fourth-order curvature extension algorithm in a level set framework for the transport of the interface. The method is based on the Continuum Surface Force approach, and is shown to efficiently calculate surface tension forces for two-phase flows. In this framework, the accuracy of the algorithms mostly relies on the precise computation of the surface curvature which we propose to accomplish using a two-step algorithm: first by computing a reliable fourth-order curvature estimation from the level set function, and second by extending this curvature rigorously in the vicinity of the surface, following the Closest Point principle. The algorithm is easy to implement and to integrate into existing solvers, and can easily be extended to 3D. We propose a detailed analysis of the geometrical and numerical criteria responsible for the appearance of spurious currents, a well known phenomenon observed in various numerical frameworks. We study the effectiveness of this novel numerical method on state-of-the-art test cases showing that the resulting curvature estimate significantly reduces parasitic currents. In addition, the proposed approach converges to fourth-order regarding spatial discretization, which is two orders of magnitude better than algorithms currently available. We also show the necessity for high-order transport methods for the surface by studying the case of the 2D advection of a column at equilibrium thereby proving the robustness of the proposed approach. The algorithm is further validated on more complex test cases such as a rising bubble.
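
    As a point of reference for the curvature step, the following is a minimal sketch of the standard second-order estimate of kappa = div(grad(phi)/|grad(phi)|) on a uniform grid; it is not the paper's fourth-order estimation and extension algorithm, and the grid spacing and test geometry are illustrative:

      import numpy as np

      def curvature(phi, h):
          """Second-order estimate of kappa = div(grad(phi)/|grad(phi)|) on a uniform grid."""
          gy, gx = np.gradient(phi, h)               # gradients along rows (y) and columns (x)
          norm = np.sqrt(gx**2 + gy**2) + 1e-12
          nx, ny = gx / norm, gy / norm
          dnx_dy, dnx_dx = np.gradient(nx, h)
          dny_dy, dny_dx = np.gradient(ny, h)
          return dnx_dx + dny_dy

      # Signed-distance level set of a circle of radius 0.25: exact curvature is 1/0.25 = 4.
      h = 1.0 / 128
      x, y = np.meshgrid(np.arange(-0.5, 0.5, h), np.arange(-0.5, 0.5, h))
      phi = np.sqrt(x**2 + y**2) - 0.25
      print(curvature(phi, h)[64, 96])  # close to 4 near the interface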

  4. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic age

  5. A new approach to accurate validation of remote sensing retrieval of evapotranspiration based on data fusion

    C. Sun

    2010-03-01

    obtained from RS retrieval, which was in accordance with previous studies (Jamieson, 1982; Dugas and Ainsworth, 1985; Benson et al., 1992; Pereira and Nova, 1992).

    After the data fusion, the correlation (R2 = 0.8516) between the monthly runoff obtained from the simulation based on ET retrieval and the observed data was higher than that (R2 = 0.8411) between the data obtained from the PM-based ET simulation and the observed data. As for the RMSE, the result (RMSE = 26.0860) between the simulated runoff based on ET retrieval and the observed data was also superior to the result (RMSE = 35.71904) between the simulated runoff obtained with PM-based ET and the observed data. As for the MBE parameter, the result (MBE = −8.6578) for the RS retrieval method was clearly better than that (MBE = −22.7313) for the PM-based method. This comparison showed that the RS retrieval had better adaptivity and higher accuracy than the PM-based method, and that the new approach based on data fusion and the distributed hydrological model is feasible, reliable and worth studying further.

  6. SOFT COMPUTING APPROACH FOR NOISY IMAGE RESTORATION

    2000-01-01

    A genetic learning algorithm based fuzzy neural network was proposed for noisy image restoration, which can adaptively find and extract the fuzzy rules contained in noise. It can efficiently remove image noise while preserving detailed image information as much as possible. The experimental results show that the proposed approach is able to perform far better than conventional noise-removal techniques.

  7. Approach for Application on Cloud Computing

    Shiv Kumar

    2012-08-01

    Full Text Available A web application is any application using a web browser as client, or we can say that it is a dynamic version of a web or application server. There are two types of web applications based on orientation: 1. A presentation-oriented web application generates interactive web pages containing various types of markup language like HTML, XML etc. and dynamic content in response to requests. 2. A service-oriented web application implements the endpoint of a web service. Web applications commonly use server-side script like ASP, PHP, etc. and client-side script like HTML, JavaScript, etc. to develop the application. Web applications are used in the field of the banking sector, insurance sector, marketing, finance, services etc. "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." - U.S. National Institute of Standards and Technology (NIST). A general and simple cloud computing definition is using web applications and/or server services that you pay to access rather than software or hardware that you buy and install.

  8. A complex network approach to cloud computing

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...
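
    A toy version of the communication-cost index can be sketched with networkx (the graph sizes, average degree and random placement of the two servers below are assumed for illustration and are not the article's settings):

      import random
      import networkx as nx

      def mean_cost_to_nearest_server(G, n_servers=2, seed=0):
          """Average shortest-path distance from each node to its nearest server."""
          random.seed(seed)
          servers = random.sample(list(G.nodes), n_servers)
          return sum(min(nx.shortest_path_length(G, node, s) for s in servers)
                     for node in G.nodes) / G.number_of_nodes()

      n, avg_deg = 200, 4
      er = nx.erdos_renyi_graph(n, avg_deg / (n - 1), seed=1)
      er = er.subgraph(max(nx.connected_components(er), key=len)).copy()  # keep the giant component
      ba = nx.barabasi_albert_graph(n, avg_deg // 2, seed=1)
      print('ER:', mean_cost_to_nearest_server(er), 'BA:', mean_cost_to_nearest_server(ba))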

  9. Computing Greeks for Lévy Models: The Fourier Transform Approach

    Federico De Olivera; Ernesto Mordecki

    2014-01-01

    The computation of Greeks for exponential Lévy models is usually approached via Malliavin calculus and other methods, such as the likelihood ratio and the finite difference method. In this paper we obtain exact formulas for Greeks of European options based on the Lewis formula for the option value. Therefore, it is possible to obtain accurate approximations using the Fast Fourier Transform. We will present an exhaustive development of Greeks for call options. The error is shown for all Greeks in the...

  10. Integrated Computable General Equilibrium (CGE) microsimulation approach

    John Cockburn; Erwin Corong; Caesar Cororaton

    2010-01-01

    Conventionally, the analysis of macro-economic shocks and the analysis of income distribution and poverty require very different methodological techniques and sources of data. Over the last decade however, the natural divide between both approaches has diminished, as evaluating the impact of macro-economic shocks on poverty and income distribution within a CGE framework complemented by household survey data has flourished. This paper focuses on explicitly integrating into a CGE model each hou...

  11. Panel discussion: Innovative approaches to high performance computing

    A large part of research in lattice field theory is carried out via computer simulations. Some research groups use computer clusters readily assembled using off-the-shelf components, while others have been developing dedicated closely coupled massively parallel supercomputers. Pros and cons of these approaches, in particular the affordability and performance of these computers, were discussed. All the options being explored have different specific uses, and it is a good sign for the future that the computer industry is now taking active interest in building special purpose high performance computers

  12. Securing applications in personal computers: the relay race approach.

    Wright, James Michael

    1991-01-01

    Approved for public release; distribution is unlimited. This thesis reviews the increasing need for security in a personal computer (PC) environment and proposes a new approach for securing PC applications at the application layer. The Relay Race Approach extends two standard approaches, data encryption and password access control at the main program level, to the subprogram level by the use of a special parameter, the "Baton". The applicability of this approach is de...

  13. Unbiased QM/MM approach using accurate multipoles from a linear scaling DFT calculation with a systematic basis set

    Mohr, Stephan; Genovese, Luigi; Ratcliff, Laura; Masella, Michel

    The quantum mechanics/molecular mechanics (QM/MM) method is a popular approach that allows atomistic simulations to be performed using different levels of accuracy. Since only the essential part of the simulation domain is treated using a highly precise (but also expensive) QM method, whereas the remaining parts are handled using a less accurate level of theory, this approach makes it possible to considerably extend the total system size that can be simulated without a notable loss of accuracy. In order to couple the QM and MM regions we use an approximation of the electrostatic potential based on a multipole expansion. The multipoles of the QM region are determined from the results of a linear scaling Density Functional Theory (DFT) calculation using a set of adaptive, localized basis functions, as implemented within the BigDFT software package. As this determination comes at virtually no extra cost compared to the QM calculation, the coupling between the QM and MM regions can be done very efficiently. In this presentation I will demonstrate the accuracy of both the linear scaling DFT approach itself and the approximation of the electrostatic potential based on the multipole expansion, and show some first QM/MM applications using the aforementioned approach.
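
    The essence of the multipole coupling can be illustrated with a toy example (point charges stand in for the DFT density, only the monopole and dipole terms are retained, and the geometry is invented; this is not the BigDFT implementation):

      import numpy as np

      def multipole_potential(positions, charges, r_eval, center=None):
          """Electrostatic potential at r_eval from the monopole + dipole of a charge set (a.u.)."""
          if center is None:
              center = positions.mean(axis=0)
          q = charges.sum()                                            # monopole
          mu = ((positions - center) * charges[:, None]).sum(axis=0)   # dipole
          d = r_eval - center
          r = np.linalg.norm(d)
          return q / r + mu @ d / r**3

      # Hypothetical QM region: a small dipolar pair of point charges
      pos = np.array([[0.0, 0.0, 0.2], [0.0, 0.0, -0.2]])
      chg = np.array([0.4, -0.4])
      r_mm = np.array([0.0, 0.0, 8.0])                                 # a distant MM site

      exact = sum(c / np.linalg.norm(r_mm - p) for p, c in zip(pos, chg))
      print(multipole_potential(pos, chg, r_mm), exact)                # the two values nearly agree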

  14. Fractal approach to computer-analytical modelling of tree crown

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach gives the possibility to visualize crown development and to calibrate the model on experimental data. In the paper, different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
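
    One common way to assign a fractal measure to a crown outline is box counting; the sketch below estimates a box-counting dimension for a toy 2D point cloud (the cloud and box sizes are purely illustrative and unrelated to the spruce data mentioned above):

      import numpy as np

      def box_counting_dimension(points, sizes):
          """Estimate the fractal dimension of a 2D point set by box counting."""
          points = np.asarray(points, float)
          points = (points - points.min(axis=0)) / np.ptp(points, axis=0)   # normalize to [0,1]^2
          counts = []
          for s in sizes:
              boxes = set(map(tuple, np.floor(points / s).astype(int)))     # occupied boxes of size s
              counts.append(len(boxes))
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      # A non-uniform toy point cloud standing in for a crown silhouette
      rng = np.random.default_rng(4)
      pts = rng.random((2000, 2)) ** np.array([1.0, 2.0])
      print(box_counting_dimension(pts, sizes=[1/4, 1/8, 1/16, 1/32]))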

  15. Computing material fronts with a Lagrange-Projection approach

    Chalons, Christophe

    2010-01-01

    This paper reports investigations on the computation of material fronts in multi-fluid models using a Lagrange-Projection approach. Various forms of the Projection step are considered. Particular attention is paid to minimization of conservation errors.

  16. Q-P Wave traveltime computation by an iterative approach

    Ma, Xuxin

    2013-01-01

    In this work, we present a new approach to computing anisotropic traveltimes based on successively solving elliptically isotropic traveltime problems. The method shows good accuracy and is very simple to implement.

  17. A genetic and computational approach to structurally classify neuronal types

    Sümbül, Uygar; Song, Sen; McCulloch, Kyle; Becker, Michael; Lin, Bin; Sanes, Joshua R.; Masland, Richard H.; Seung, H. Sebastian

    2014-01-01

    The importance of cell types in understanding brain function is widely appreciated but only a tiny fraction of neuronal diversity has been catalogued. Here, we exploit recent progress in genetic definition of cell types in an objective structural approach to neuronal classification. The approach is based on highly accurate quantification of dendritic arbor position relative to neurites of other cells. We test the method on a population of 363 mouse retinal ganglion cells. For each cell, we de...

  18. Assessment of the extended Koopmans' theorem for the chemical reactivity: Accurate computations of chemical potentials, chemical hardnesses, and electrophilicity indices.

    Yildiz, Dilan; Bozkaya, Uğur

    2016-01-30

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials and electron affinities from any level of theory. Although it is widely applied to ionization potentials, the EKT approach has not previously been applied to the evaluation of chemical reactivity. We present the first benchmarking study to investigate the performance of EKT methods for predictions of chemical potentials (μ) (hence electronegativities), chemical hardnesses (η), and electrophilicity indices (ω). We assess the performance of the EKT approaches for post-Hartree-Fock methods, such as Møller-Plesset perturbation theory, the coupled-electron pair theory, and their orbital-optimized counterparts, for the evaluation of chemical reactivity. In particular, results of the orbital-optimized coupled-electron pair theory method (with the aug-cc-pVQZ basis set) for predictions of chemical reactivity are very promising; the corresponding mean absolute errors are 0.16, 0.28, and 0.09 eV for μ, η, and ω, respectively. PMID:26458329
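
    Once EKT ionization potentials and electron affinities are in hand, the descriptors follow from the usual finite-difference conceptual-DFT expressions, sketched below; note that the hardness convention (with or without a factor of 1/2) varies between papers, and the numbers used here are purely illustrative:

      def reactivity_indices(ip, ea):
          """Chemical potential, hardness and electrophilicity from IP and EA (eV)."""
          mu = -(ip + ea) / 2.0        # chemical potential (negative of the electronegativity)
          eta = ip - ea                # hardness; some works use (ip - ea) / 2 instead
          omega = mu**2 / (2.0 * eta)  # electrophilicity index
          return mu, eta, omega

      # Example with hypothetical EKT values for a small molecule
      print(reactivity_indices(ip=10.5, ea=0.8))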

  19. Soft computing approaches to uncertainty propagation in environmental risk mangement

    Kumar, Vikas

    2008-01-01

    Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have the characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like pro...

  20. A computational approach to chemical etiologies of diabetes

    Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe

    2013-01-01

    Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic......, and offers thus promising guidance for future research in regard to the etiology and pathogenesis of complex diseases....

  1. General approaches in ensemble quantum computing

    V Vimalan; N Chandrakumar

    2008-01-01

    We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate our methodology experimentally for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of the typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration to a third of that of the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.

  2. Multivariate analysis: A statistical approach for computations

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval methods and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.

  3. Accurate quantification of endogenous androgenic steroids in cattle's meat by gas chromatography mass spectrometry using a surrogate analyte approach

    Determination of endogenous steroids in complex matrices such as cattle's meat is a challenging task. Since endogenous steroids always exist in animal tissues, no analyte-free matrix is available for constructing the standard calibration line, which is crucial for accurate quantification, especially at trace levels. Although some methods have been proposed to solve the problem, none has offered a complete solution. To this end, a new quantification strategy was developed in this study, named the 'surrogate analyte approach', which is based on using isotope-labeled standards instead of the natural form of the endogenous steroids for preparing the calibration line. In comparison with the other methods currently in use for the quantitation of endogenous steroids, this approach provides improved simplicity and speed for analysis on a routine basis. The accuracy of this method is better than that of other methods at low concentrations and comparable to standard addition at medium and high concentrations. The method was also found to be valid according to the ICH criteria for bioanalytical methods. The developed method could be a promising approach in the field of compound residue analysis.
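
    The quantification logic can be sketched as a simple linear calibration built from the isotope-labeled surrogate; all concentrations, responses and the response factor below are invented for illustration and are not the study's data:

      import numpy as np

      # Calibration with the isotope-labelled surrogate spiked into blank matrix
      conc_labelled = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # spiked levels (ng/g)
      response_labelled = np.array([0.9, 2.1, 4.0, 10.2, 19.8])     # instrument response (illustrative)

      slope, intercept = np.polyfit(conc_labelled, response_labelled, 1)

      def quantify(response_natural, response_factor=1.0):
          """Concentration of the natural analyte from its response via the surrogate line.

          response_factor corrects for any response difference between the labelled
          surrogate and the natural analyte (assumed 1.0 here)."""
          return (response_natural / response_factor - intercept) / slope

      print(quantify(6.3))   # ng/g of the endogenous steroid in the sample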

  4. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date, several attempts have been made to predict the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which

  5. Numerical Methods for Stochastic Computations A Spectral Method Approach

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  6. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
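
    The encoding and clustering steps can be illustrated outside Hadoop with a small octant-identifier sketch (the depth, bounding box and synthetic points below are arbitrary; the map and reduce phases are mimicked here by a generator expression and a Counter):

      from collections import Counter
      import numpy as np

      def octant_id(point, lo, hi, depth):
          """Identifier of the octant (at the given depth) containing a 3D point."""
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          digits = []
          for _ in range(depth):
              mid = (lo + hi) / 2.0
              bits = (point >= mid).astype(int)            # one bit per axis
              digits.append(int(bits[0] * 4 + bits[1] * 2 + bits[2]))
              lo = np.where(bits == 1, mid, lo)
              hi = np.where(bits == 1, hi, mid)
          return tuple(digits)

      rng = np.random.default_rng(2)
      points = rng.normal(loc=0.3, scale=0.1, size=(1000, 3))   # stand-ins for encoded ligand poses

      # "Map": emit (octant_id, 1); "Reduce": count per octant and keep the densest one.
      counts = Counter(octant_id(p, lo=[-1, -1, -1], hi=[1, 1, 1], depth=3) for p in points)
      print(counts.most_common(1))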

  7. Convergence Analysis of a Class of Computational Intelligence Approaches

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, it is difficult to analyze their convergence. In this paper, a computational model is built for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Then, two quantification indices, the variation rate and the progress rate, are defined to indicate the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  8. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    Abolfazli, Saeid; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward rich computing experiences. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, namely hardware and software. Generating high-end hardware is a subset of hardware augmentation approaches, whereas conserving local resources and reducing resource requirements are grouped under software augmentation methods. Our study advocates that conserving smartphones' native resources, which is mainly done via task offloading, is more appropriate for already-developed applications than new ones, due to the costly re-development process. Cloud computing has recently gained considerable ground as one of the major co...

  9. What is intrinsic motivation? A typology of computational approaches

    Pierre-Yves Oudeyer

    2009-11-01

    Full Text Available Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.

  10. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
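
    The qualitative claim about node degrees can be reproduced with a toy stochastic simulation on a scale-free graph (the SIS-style update rule, the rates and the graph size below are assumed for illustration and are not the paper's differential-equation model):

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(3)
      G = nx.barabasi_albert_graph(500, 3, seed=3)
      beta, gamma, steps = 0.05, 0.1, 200            # assumed infection / cure probabilities per step

      infected = np.zeros(G.number_of_nodes(), dtype=bool)
      infected[rng.choice(G.number_of_nodes(), 5, replace=False)] = True
      time_infected = np.zeros(G.number_of_nodes())

      for _ in range(steps):
          new_state = infected.copy()
          for node in G.nodes:
              if infected[node]:
                  if rng.random() < gamma:
                      new_state[node] = False          # cured, but susceptible again (SIS-like)
              elif any(infected[nb] for nb in G.neighbors(node)) and rng.random() < beta:
                  new_state[node] = True
          infected = new_state
          time_infected += infected

      degrees = np.array([d for _, d in G.degree()])
      print(np.corrcoef(degrees, time_infected)[0, 1])  # typically positive: hubs stay infected longer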

  11. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects....

  12. An approach to computing direction relations between separated object groups

    H. Yan

    2013-06-01

    Full Text Available Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups; then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  13. Computational experiment approach to advanced secondary mathematics curriculum

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  14. An approach to computing direction relations between separated object groups

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  15. A tale of three bio-inspired computational approaches

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation (EC) implements what we may call the "mother of all adaptive processes"; some variants on the basic algorithms will be sketched, along with lessons I have gleaned from three decades of working with EC. Next come neural networks, computational approaches that have long been studied as possible routes to "thinking machines", an old human dream, and that are based on the only known existing example of intelligence. I will then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems; this technology is finding applications in the cyber-security world.
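
    As a concrete illustration of the first of these approaches, the sketch below shows a minimal generational genetic algorithm over bit strings; the population size, mutation rate, and truncation selection are illustrative choices, not details from the talk.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100,
           p_mut=0.02, seed=0):
    """Minimal generational genetic algorithm over bit strings (sketch only;
    parameters are illustrative, not drawn from the talk)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: evolve(sum) converges towards the all-ones string (the "one-max" problem).
```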

  16. Distributed computer-controlled systems: the DEAR-COTS approach

    Veríssimo, P.; Casimiro, A.; Pinho, L. M.; Vasques, F.; Rodrigues, L.; Tovar, E.

    2000-01-01

    This paper proposes a new architecture targeting real-time and reliable Distributed Computer-Controlled Systems (DCCS). This architecture provides a structured approach for the integration of soft and/or hard real-time applications with Commercial Off-The-Shelf (COTS) components. The Timely Computing Base model is used as the reference model to deal with the heterogeneity of system components with respect to guaranteeing the timeliness of applications. The reliability and ava...

  17. A GPU-Computing Approach to Solar Stokes Profile Inversion

    Harker, Brian J.; Mighell, Kenneth J.

    2012-01-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specif...

  18. Computational approaches for the study of biotechnologically-relevant macromolecules.

    Filippi, G.

    2016-01-01

    Computer-based techniques have become especially important in molecular biology, since they often represent the only viable way to understand some phenomena at the atomic and molecular level. The complexity of biological systems, which usually needs to be analyzed with different levels of accuracy, requires the application of different approaches. Computational methodologies applied to biotechnologies allow a molecular comprehension of biological systems at different levels of depth. Quantum mech...

  19. Computational biomechanics for medicine new approaches and new applications

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  20. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Galli, Corrado Lodovico; Sensi, Cristina; Fumagalli, Amos; Parravicini, Chiara; Marinovich, Marina; Eberini, Ivano

    2014-01-01

    Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach proved useful as a first step of chemical safety evaluation, since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity of the AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over- or under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to hazard evaluation can accelerate discovery and innovation with a lower economic effort than a fully wet strategy. PMID:25111804

  1. A Computationally Based Approach to Homogenizing Advanced Alloys

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied to Ni-based superalloys as well as to the computationally more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and its subsequent verification on real castings are presented.
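
    The Scheil/DICTRA workflow itself requires Thermo-Calc, but the way time, temperature, and microstructural scale enter can be illustrated with the classical estimate for the decay of a sinusoidal segregation profile, delta(t) = exp(-4*pi^2*D*t/lambda^2). In the sketch below the diffusion prefactor, activation energy, and dendrite arm spacing are placeholder values, not data from the paper.

```python
import math

def homogenization_time(target_residual, wavelength_m, D0, Q, T_kelvin):
    """Classical estimate of homogenization time for a sinusoidal
    segregation profile: delta(t) = exp(-4*pi^2*D*t / lambda^2).

    D0 [m^2/s] and Q [J/mol] are the diffusion prefactor and activation
    energy of the slowest-diffusing solute; wavelength_m is roughly the
    secondary dendrite arm spacing.  All numbers below are placeholders.
    """
    R = 8.314  # gas constant, J/(mol K)
    D = D0 * math.exp(-Q / (R * T_kelvin))
    return -math.log(target_residual) * wavelength_m**2 / (4 * math.pi**2 * D)

# Example (placeholder data): reduce segregation to 1% of its as-cast value
# for lambda = 100 um, D0 = 1e-4 m^2/s, Q = 280 kJ/mol, at 1200 C.
t = homogenization_time(0.01, 100e-6, 1e-4, 2.8e5, 1473.0)
print(f"approx. {t / 3600:.1f} h")
```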

  2. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods. PMID:25068299
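
    The final step of any such error-cancelling scheme is a Hess's-law assembly: the reaction energy of the balanced CBH scheme, computed at a modest level of theory, is combined with accurate heats of formation of the small reference species. The sketch below shows only that bookkeeping step; the automated generation of the CBH fragments is not reproduced here, and all inputs are user-supplied.

```python
def heat_of_formation(delta_E_reaction, products, reactants_other, target_coeff=1.0):
    """Hess's-law assembly used in error-cancelling schemes such as CBH (sketch).

    delta_E_reaction : reaction energy of the balanced scheme
                       target + other reactants -> products, computed at a
                       modest level of theory.
    products, reactants_other : lists of (coefficient, known heat of formation)
                       pairs for the small reference species (from experiment
                       or high-level calculations).
    Returns the inferred heat of formation of the target species.
    """
    sum_products = sum(c * h for c, h in products)
    sum_reactants = sum(c * h for c, h in reactants_other)
    # dH_rxn = sum dHf(products) - dHf(target) - sum dHf(other reactants)
    return (sum_products - sum_reactants - delta_E_reaction) / target_coeff
```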

  3. An engineering based approach for hydraulic computations in river flows

    Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.

    2016-06-01

    This paper presents an engineering-based approach for hydraulic risk evaluation. The aim of the research is to identify criteria for choosing the simplest appropriate model to use in different scenarios, as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities, and accelerations, can be described through a three-dimensional approach that considers all flow properties varying in all directions. In many practical applications for river flow studies, however, the greatest changes occur only in two dimensions or even in one. In these cases the use of simplified approaches can lead to accurate results, with easier-to-build and faster simulations. The study has been conducted taking into account a dimensionless channel parameter, the ratio of the curvature radius to the channel width (R/B).
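
    The kind of decision rule the study aims at can be sketched as a simple selector on the R/B ratio. The numerical thresholds below are placeholders for illustration only; they are not values derived in the paper.

```python
def suggest_model(radius_of_curvature, channel_width,
                  threshold_1d=10.0, threshold_2d=3.0):
    """Illustrative selector between 1D/2D/3D river-flow models based on the
    dimensionless ratio R/B used in the study.  The threshold values are
    placeholders for illustration only; they are NOT taken from the paper.
    """
    ratio = radius_of_curvature / channel_width
    if ratio >= threshold_1d:
        return "1D model likely adequate (nearly straight reach)"
    if ratio >= threshold_2d:
        return "2D depth-averaged model suggested"
    return "3D model suggested (strong curvature effects)"
```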

  4. A GPU-Computing Approach to Solar Stokes Profile Inversion

    Harker, Brian J

    2012-01-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disc maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel genetic algorithm with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disc vector ma...
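
    The core of the population-parallel idea is that every member of the GA population can be scored against the observed Stokes profiles independently. The sketch below shows that step as a vectorized CPU/NumPy stand-in for the GPU kernel; the Milne-Eddington forward model is assumed to be supplied as a batch function and is not reproduced here.

```python
import numpy as np

def evaluate_population(population, observed_stokes, forward_model):
    """Population-parallel fitness evaluation (NumPy stand-in for a GPU kernel).

    `forward_model` is assumed to map a batch of model-parameter vectors to
    synthetic Stokes profiles with the same shape as `observed_stokes`
    (n_wavelengths, 4); it is a placeholder for a Milne-Eddington solver.
    """
    synthetic = forward_model(population)              # shape (n_pop, n_wl, 4)
    residual = synthetic - observed_stokes[None, :, :]
    chi2 = np.sum(residual**2, axis=(1, 2))            # one value per member
    return -chi2                                       # higher fitness = better fit
```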

  5. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes.

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-10-30

    Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with answers that can be relied on as much as the outputs of more complex methods. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic acid compounds for which quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model. PMID:22959133
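
    The abstract does not give the fitted coefficients, so the sketch below shows only the general form of such an element-count model; the coefficients and any fragment corrections are placeholders, not the paper's values.

```python
def predict_log_ld50(n_carbon, n_hydrogen, fragment_corrections=0.0,
                     a=2.0, b=0.05, c=-0.02):
    """Form of a simple element-count toxicity model of the kind described
    (first model: carbon and hydrogen counts only; second model adds
    molecular-fragment corrections).  The coefficients a, b, c and any
    fragment corrections are PLACEHOLDERS for illustration; the paper's
    fitted values are not reproduced here.
    """
    return a + b * n_carbon + c * n_hydrogen + fragment_corrections
```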

  6. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.
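
    The two-layer ONIOM energy used in hybrid studies of this kind is an extrapolation in which the chemically active model region (here the enediyne core) is treated at the high level and the full system at the low level. A minimal sketch of that combination, and of how a cyclization barrier would be assembled from it, is given below.

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM extrapolation: the small 'model' region is treated at
    the high level, the full 'real' system at the low level, and the
    low-level model energy removes the double counting."""
    return e_high_model + e_low_real - e_low_model

def oniom2_barrier(ts_energies, reactant_energies):
    """Barrier estimate: apply the same extrapolation to the transition state
    and the reactant, then take the difference.  Each argument is a tuple
    (e_high_model, e_low_real, e_low_model)."""
    return oniom2_energy(*ts_energies) - oniom2_energy(*reactant_energies)
```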

  7. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with answers that can be relied on as much as the outputs of more complex methods. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic acid compounds for which quantitative structure–toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  8. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    Keshavarz, Mohammad Hossein, E-mail: mhkeshavarz@mut-es.ac.ir [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran]; Gharagheizi, Farhad [Department of Chemical Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Islamic Republic of Iran]; Shokrolahi, Arash; Zakinejad, Sajjad [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran]

    2012-10-30

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with answers that can be relied on as much as the outputs of more complex methods. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic acid compounds for which quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  9. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    Abadjiev Valentin

    2014-09-01

    Full Text Available Computer-aided design has advanced through the development of different types of software for scientific research in the field of gearing theory, as well as through adequate scientific support for gear drive manufacture. Computer programs based on mathematical models resulting from this research are presented here. Modern gear transmissions require new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on iteration procedures that find an optimal solution by varying definite parameters.
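
    The abstract does not specify the synthesis criteria, so the sketch below shows only the generic iterate-and-vary structure it alludes to: sweep the definite design parameters over given ranges and keep the combination that minimizes a user-supplied objective (e.g. a contact or strength criterion, which is not defined here).

```python
import itertools

def synthesize(objective, parameter_ranges):
    """Generic iteration procedure of the kind the abstract describes: vary
    the design parameters over given ranges and keep the combination that
    minimizes a user-supplied objective.  The objective and the parameter
    ranges are assumptions supplied by the caller."""
    best, best_params = float("inf"), None
    names = list(parameter_ranges)
    for values in itertools.product(*parameter_ranges.values()):
        params = dict(zip(names, values))
        score = objective(params)
        if score < best:
            best, best_params = score, params
    return best_params, best

# Example: synthesize(lambda p: (p["ratio"] - 3.5) ** 2,
#                     {"ratio": [3.0, 3.5, 4.0], "angle": [15, 20]})
```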

  10. Computer vision approaches to medical image analysis. Revised papers

    This book constitutes the thoroughly refereed post-proceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, ECCV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)