WorldWideScience

Sample records for accurate computational approach

  1. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  2. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. INDUS - a composition-based approach for rapid and accurate taxonomic classification of metagenomic sequences

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Reddy, Rachamalla Maheedhar; Reddy, Chennareddy Venkata Siva Kumar; Singh, Nitin Kumar; Mande, Sharmila S.

    2011-01-01

    Background Taxonomic classification of metagenomic sequences is the first step in metagenomic analysis. Existing taxonomic classification approaches are of two types, similarity-based and composition-based. Similarity-based approaches, though accurate and specific, are extremely slow. Since metagenomic projects generate millions of sequences, adopting similarity-based approaches becomes virtually infeasible for research groups having modest computational resources. In this study, we present ...

  4. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  5. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed-voting approach in which every pixel votes is adopted to generate potential eye center candidates. The votes are distributed over a subset of pixels lying in the direction opposite to the gradient direction, and the weight of the votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using a Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and a local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the eye center locations. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, the presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
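
    The voting mechanism is described above only in prose; the sketch below (plain NumPy, with hypothetical names such as vote_eye_center, n_steps and grad_thresh) illustrates one way "every strong-gradient pixel votes along the ray opposite to its gradient direction" can be accumulated into a candidate map. It is a minimal illustration of the general idea under stated assumptions, not the authors' pipeline: the Canny edge map, morphological closing and candidate classifier are omitted.

```python
import numpy as np

def vote_eye_center(gray, n_steps=25, grad_thresh=20.0):
    """Toy distributed-voting accumulator for eye-center candidates (illustrative only)."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)                  # image gradients (rows = y, cols = x)
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(gray)

    ys, xs = np.nonzero(mag > grad_thresh)      # strong-gradient pixels stand in for a Canny edge map
    for y, x in zip(ys, xs):
        # Unit vector opposite to the gradient: points from the bright iris/sclera
        # boundary towards the darker pupil interior.
        dx, dy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]
        for step in range(1, n_steps + 1):
            vx, vy = int(round(x + step * dx)), int(round(y + step * dy))
            if 0 <= vy < gray.shape[0] and 0 <= vx < gray.shape[1]:
                acc[vy, vx] += 1.0 / step       # one possible distance-based vote weighting
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy, acc
```

    On a cropped grayscale eye region, taking the accumulator maximum in this way gives a rough center estimate; the morphological filtering and classifier mentioned in the record would then prune and verify such candidates.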

  6. An Integrative Approach to Accurate Vehicle Logo Detection

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  7. Computational approaches to vision

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  8. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  9. Multidetector row computed tomography may accurately estimate plaque vulnerability: does MDCT accurately estimate plaque vulnerability? (Pro).

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal ECG and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. PMID:21532180

  10. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the feedback is delayed. Travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillations, and the gap from system equilibrium.
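
    As a concrete reading of the boundedly rational threshold described above, the toy sketch below (Python; choose_route and br_threshold are hypothetical names) treats the two routes as indifferent whenever the reported travel-time difference is within BR. It illustrates only the decision rule, not the authors' traffic simulation.

```python
import random

def choose_route(reported_travel_times, br_threshold):
    """Boundedly rational choice between two routes given (possibly delayed) feedback."""
    t1, t2 = reported_travel_times
    if abs(t1 - t2) <= br_threshold:
        return random.choice([0, 1])     # indifferent: pick either route with equal probability
    return 0 if t1 < t2 else 1           # otherwise take the route reported as faster

# Reported gap (0.6) is below BR (1.0), so roughly half the travelers take each route.
choices = [choose_route((10.2, 10.8), br_threshold=1.0) for _ in range(10000)]
print(sum(choices) / len(choices))       # ~0.5
```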

  11. Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically-intensive...

  12. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
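
    The reconstruction above runs ten conjugate-gradient iterations per study. For orientation, the minimal NumPy sketch below shows plain conjugate-gradient iterations for a symmetric positive-definite system; it is a generic textbook routine, not the parallel SPET system model or projector used in the paper.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=10):
    """Run n_iter conjugate-gradient iterations for A x = b (A symmetric positive definite)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                        # initial residual
    p = r.copy()
    rs_old = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p    # new conjugate search direction
        rs_old = rs_new
    return x

# Tiny SPD example (random matrix made positive definite).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)
b = rng.standard_normal(20)
x = conjugate_gradient(A, b, n_iter=10)
print(np.linalg.norm(A @ x - b))         # residual after 10 iterations
```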

  14. Accurate FOFEM computations and ray tracing in particle optics

    Lencová, Bohumila

    Oxford: Institute of Physics, 2004, pp. 169-172. ISBN 0-750-30967-9. [EMAG 2003 Electron Microscopy and Analysis Group Conference, Oxford (GB), 03.09.2003-05.09.2003]. R&D Projects: GA ČR GA202/03/1575. Keywords: finite element method; magnetic electron lenses; accuracy of computation. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  15. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  16. On accurate computations of bound state properties in three- and four-electron atomic systems

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li⁻ in its ground 2²S state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  17. A programming approach to computability

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  18. Elliptic curves a computational approach

    Schmitt, Susanne; Pethö, Attila

    2003-01-01

    The basics of the theory of elliptic curves should be known to everybody, be he (or she) a mathematician or a computer scientist. Especially everybody concerned with cryptography should know the elements of this theory. The purpose of the present textbook is to give an elementary introduction to elliptic curves. Since this branch of number theory is particularly accessible to computer-assisted calculations, the authors make use of it by approaching the theory under a computational point of view. Specifically, the computer-algebra package SIMATH can be applied on several occasions. However, the book can be read also by those not interested in any computations. Of course, the theory of elliptic curves is very comprehensive and becomes correspondingly sophisticated. That is why the authors made a choice of the topics treated. Topics covered include the determination of torsion groups, computations regarding the Mordell-Weil group, height calculations, S-integral points. The contents is kept as elementary as poss...

  19. A semi-empirical approach to accurate standard enthalpies of formation for solid hydrides

    Klaveness, A. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway)], E-mail: arnekla@kjemi.uio.no; Fjellvag, H.; Kjekshus, A.; Ravindran, P. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway); Swang, O. [SINTEF Materials and Chemistry, P.O. Box 124, Blindern, N-0314 Oslo (Norway)

    2009-02-05

    A semi-empirical method for estimation of enthalpies of formation of solid hydrides is proposed. The method is named Ionic for short. By combining experimentally known enthalpies of formation for simple hydrides and reaction energies computed using band-structure density functional theory (DFT) methods, startlingly accurate results can be achieved. The approach relies on cancellation of errors when comparing DFT energies for systems with similar electronic structures. The influence of zero-point energies, polaritons, and vibrational excitations on the results has been examined and found to be minor.
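
    The combination rule behind the method amounts to Hess's-law bookkeeping: add the DFT reaction energy to the experimentally known enthalpies of formation of the simple reactant hydrides (and subtract those of any known by-products). The sketch below shows only that bookkeeping; the function name and the numbers in the example call are illustrative placeholders, not values from the paper.

```python
def enthalpy_of_formation(dft_reaction_energy, reactant_enthalpies, known_product_enthalpies=()):
    """Hess's-law combination behind the semi-empirical ('Ionic') estimate (sketch only).

    For a formation reaction  reactants -> target (+ known by-products):
        dHf(target) ~ dE_DFT(reaction) + sum dHf_exp(reactants) - sum dHf_exp(by-products)
    All values in kJ/mol; zero-point and thermal corrections are neglected here,
    which the record reports to be a minor effect.
    """
    return (dft_reaction_energy
            + sum(reactant_enthalpies)
            - sum(known_product_enthalpies))

# Placeholder example (numbers are illustrative, NOT data from the paper):
# NaH + AlH3 -> NaAlH4, with an assumed DFT reaction energy and assumed
# experimental enthalpies of formation for the two simple hydrides.
print(enthalpy_of_formation(-40.0, reactant_enthalpies=[-56.0, -12.0]))
```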

  20. Computational approaches to energy materials

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  1. Development of highly accurate approximate scheme for computing the charge transfer integral.

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
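
    Why the energy-split-in-dimer estimate degrades along asymmetric coordinates can be made explicit with the standard two-level (dimer) model, which the abstract does not spell out. In that picture the adiabatic splitting mixes the transfer integral with the site-energy difference:

```latex
% Standard two-level dimer relation (quoted for context only):
% \epsilon_1, \epsilon_2 are diabatic site energies, J is the transfer integral.
\Delta E_{12} = \sqrt{(\epsilon_1 - \epsilon_2)^2 + 4J^2}
\quad\Longrightarrow\quad
J = \tfrac{1}{2}\,\Delta E_{12} \ \text{only when } \epsilon_1 = \epsilon_2 .
```

    For symmetric displacements the site energies stay equal and halving the splitting is exact, whereas an asymmetric alteration introduces a nonzero site-energy difference that the splitting alone cannot separate from J.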

  3. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. Although many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation close to the source with a plane-wave approximation far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We couple these local operators with a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
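
    To make the "global fast sweeping" ingredient concrete, here is a minimal first-order 2-D fast sweeping solver in NumPy (fast_sweep_eikonal is a hypothetical name). It illustrates only the sweeping idea with the plain upwind operator; the authors' hybrid spherical/plane-wave operators, their 16 local operators and the variable grid spacing are not reproduced.

```python
import numpy as np

def fast_sweep_eikonal(slowness, src, h=1.0, n_sweeps=8):
    """First-order fast sweeping solver for the 2-D eikonal equation |grad T| = slowness."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orderings = [(range(ny), range(nx)),
                 (range(ny), range(nx - 1, -1, -1)),
                 (range(ny - 1, -1, -1), range(nx)),
                 (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):
        ys, xs = orderings[sweep % 4]           # alternate the four sweep directions
        for i in ys:
            for j in xs:
                if (i, j) == src:
                    continue
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < ny - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < nx - 1 else np.inf)
                if np.isinf(a) and np.isinf(b):
                    continue                    # no upwind information yet
                f = slowness[i, j] * h
                if abs(a - b) >= f:             # causal update from one direction only
                    t_new = min(a, b) + f
                else:                           # two-directional upwind update
                    t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous unit-slowness medium: T approximates Euclidean distance to the source.
T = fast_sweep_eikonal(np.ones((50, 50)), src=(25, 25))
```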

  4. Proposed Enhanced Object Recognition Approach for Accurate Bionic Eyes

    Mohammad Shkoukani

    2012-07-01

    AI has played a huge role in image formation and recognition, but all of it is built on the supervised and unsupervised learning algorithms that the learning agents follow. Neural networks also have a role in bionic eye integration, but they are not discussed thoroughly in this paper. The chip to be implanted, a robotic device that applies methods developed in machine learning, runs large-scale feature-learning algorithms that construct classifiers for object detection and recognition as input to the chip system. The challenge, however, is in identifying a complex image, which may require combining several feature-learning algorithms. In this paper, previously tested approaches are presented for individual cases of object concentration to obtain a high recognition outcome. Each approach addresses one aspect, and a suggested, as-yet-untested approach may give a better visual aid for bionic recognition and identification by using more learning and testing methods. The paper discusses different kernel and convolutional approaches to classifying objects, in addition to a proposed model to maximize object formation and recognition. The proposed model combines a variety of algorithms that have been tested in different related works and uses different learning approaches to handle large training datasets.

  5. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
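
    The core of the methodology is an optimization over component fractions so that mixture properties match the target fuel. The sketch below mimics that loop with SciPy under a crude linear-mixing assumption; the three-component palette, property values, targets and weights are illustrative placeholders, and the real workflow evaluates properties with REFPROP and RON with CHEMKIN batch-reactor simulations rather than by linear blending.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pure-component properties (H/C ratio, density in kg/m^3, RON);
# placeholders only: the real workflow takes these from DHA/REFPROP/CHEMKIN.
components = ["n-heptane", "iso-octane", "toluene"]
props = np.array([
    [2.29, 684.0,   0.0],    # n-heptane
    [2.25, 692.0, 100.0],    # iso-octane
    [1.14, 867.0, 120.0],    # toluene
])
targets = np.array([1.92, 720.0, 84.0])     # assumed target-fuel properties
weights = np.array([1.0, 1e-2, 1e-1])       # scale the residuals to comparable size

def objective(x):
    mixture = props.T @ x                   # linear mixing as a crude stand-in
    return np.sum((weights * (mixture - targets)) ** 2)

cons = ({"type": "eq", "fun": lambda x: np.sum(x) - 1.0},)   # fractions sum to 1
bounds = [(0.0, 1.0)] * len(components)
x0 = np.full(len(components), 1.0 / len(components))
res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
print(dict(zip(components, res.x.round(3))))
```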

  6. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
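
    For orientation, the simple O(n^2) NumPy routine below computes Gauss-Legendre nodes and weights with Newton's method started from the classical cosine initial guesses. It illustrates the Newton-iteration idea only; the paper's O(n) algorithm additionally replaces the recurrence-based function evaluations with asymptotic formulae.

```python
import numpy as np

def gauss_legendre(n, n_newton=10):
    """Gauss-Legendre nodes and weights via Newton's method on P_n (simple O(n^2) sketch)."""
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))      # classical initial guesses for the roots
    for _ in range(n_newton):
        p0, p1 = np.ones_like(x), x.copy()          # evaluate P_{n-1}, P_n by the recurrence
        for m in range(2, n + 1):
            p0, p1 = p1, ((2 * m - 1) * x * p1 - (m - 1) * p0) / m
        dp = n * (x * p1 - p0) / (x * x - 1.0)      # derivative P_n'(x)
        x -= p1 / dp                                # Newton step
    w = 2.0 / ((1.0 - x * x) * dp * dp)             # quadrature weights
    return x, w

# Sanity check: a 5-point rule integrates x^2 on [-1, 1] exactly (value 2/3).
x, w = gauss_legendre(5)
print(np.sum(w * x ** 2))
```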

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K. (SLAC)

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  9. GRID COMPUTING AND CHECKPOINT APPROACH

    Pankaj gupta

    2011-05-01

    Grid computing is a means of allocating the computational power of a large number of computers to complex, difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large scale systems that even span organizational boundaries. In this paper we investigate the different techniques of fault tolerance which are used in many real time distributed systems. The main focus is on types of fault occurring in the system, fault detection techniques and the recovery techniques used. A fault can occur due to link failure, resource failure or any other reason, and it must be tolerated so that the system keeps working smoothly and accurately. These faults can be detected and recovered by many techniques used accordingly. An appropriate fault detector can avoid loss due to system crash, and a reliable fault tolerance technique can save the system from failure. This paper describes how these methods are applied to detect and tolerate faults in various Real Time Distributed Systems. The advantages of utilizing the checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low-level checkpointing packages. Therefore, such a standard named Grid Checkpointing Architecture is being designed. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for an automatic recovery is the availability of checkpoint files. A strategy to increase the availability of checkpoints is replication. Grid computing is a form of distributed computing used mainly to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of
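
    At its core, the checkpoint/restart mechanism described above reduces to periodically persisting job state and resuming from the last saved state after a failure. The sketch below (Python; the file name and function are hypothetical) shows only that skeleton; replication of checkpoint files, failure detection and rescheduling across grid resources are not modeled.

```python
import os
import pickle

CHECKPOINT = "job_state.ckpt"        # in a grid setting this file would also be replicated

def run_job(n_steps, checkpoint_every=100):
    """Minimal checkpoint/restart loop: resume from the last checkpoint if one exists."""
    step, state = 0, 0.0
    if os.path.exists(CHECKPOINT):                   # recovery path after a failure
        with open(CHECKPOINT, "rb") as f:
            step, state = pickle.load(f)
    while step < n_steps:
        state += step * 1e-6                         # stand-in for real work
        step += 1
        if step % checkpoint_every == 0:
            with open(CHECKPOINT, "wb") as f:        # atomic write/rotation omitted for brevity
                pickle.dump((step, state), f)
    return state

# run_job(1_000_000)  # re-running after a crash resumes from the last checkpoint
```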

  10. Computational Approaches to Vestibular Research

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three-dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  11. Computer Networks A Systems Approach

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  12. Efficient and accurate P-value computation for Position Weight Matrices

    Varré Jean-Stéphane

    2007-12-01

    Background: Position Weight Matrices (PWMs) are probabilistic representations of signals in sequences. They are widely used to model approximate patterns in DNA or in protein sequences. Using PWMs requires, as a prerequisite, knowing the statistical significance of a word according to its score. This is done by defining the P-value of a score, which is the probability that the background model can achieve a score larger than or equal to the observed value. This gives rise to the following problem: given a P-value, find the corresponding score threshold. Existing methods rely on dynamic programming or probability generating functions. For many examples of PWMs, they fail to give accurate results in a reasonable amount of time. Results: The contribution of this paper is twofold. First, we study the theoretical complexity of the problem and prove that it is NP-hard. Then, we describe a novel algorithm that solves the P-value problem efficiently. The main idea is to use a series of discretized score distributions that improves the final result step by step until some convergence criterion is met. Moreover, the algorithm is capable of calculating the exact P-value without any error, even for matrices with non-integer coefficient values. The same approach is also used to devise an accurate algorithm for the reverse problem: finding the P-value for a given score. Both methods are implemented in software called TFM-PVALUE, which is freely available. Conclusion: We have tested TFM-PVALUE on a large set of PWMs representing transcription factor binding sites. Experimental results show that it achieves better performance in terms of computational time and precision than existing tools.
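
    The score-distribution idea can be illustrated with a small dynamic program: propagate a discretized distribution of partial scores column by column, then sum the tail at the threshold. The sketch below is a simplified fixed-precision version of that idea (pwm_score_pvalue is a hypothetical name); TFM-PVALUE itself refines the discretization iteratively until convergence and can report exact values.

```python
from collections import defaultdict

def pwm_score_pvalue(pwm, threshold, background=None, precision=100):
    """P(score >= threshold) for a PWM under an i.i.d. background, by discretized DP.

    pwm: list of {base: log-odds score} dicts, one per matrix column.
    """
    if background is None:
        background = {b: 0.25 for b in "ACGT"}
    dist = defaultdict(float)
    dist[0] = 1.0                                    # empty prefix has score 0, probability 1
    for column in pwm:
        new_dist = defaultdict(float)
        for score, prob in dist.items():
            for base, p_bg in background.items():
                s = score + int(round(column[base] * precision))   # discretize the column score
                new_dist[s] += prob * p_bg
        dist = new_dist
    cutoff = int(round(threshold * precision))
    return sum(p for s, p in dist.items() if s >= cutoff)

# Toy 2-column matrix favouring the word 'AC'; only 'AC' scores >= 3.0, so P-value = 1/16.
pwm = [{"A": 2.0, "C": -1.0, "G": -1.0, "T": -1.0},
       {"A": -1.0, "C": 2.0, "G": -1.0, "T": -1.0}]
print(pwm_score_pvalue(pwm, threshold=3.0))          # 0.0625
```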

  13. Can computer simulators accurately represent the pathophysiology of individual COPD patients?

    Wang, Wenfei; Das, Anup; Ali, Tayyba; Cole, Oanna; Chikhani, Marc; Haque, Mainul; Hardman, Jonathan G; Bates, Declan G

    2014-01-01

    Background Computer simulation models could play a key role in developing novel therapeutic strategies for patients with chronic obstructive pulmonary disease (COPD) if they can be shown to accurately represent the pathophysiological characteristics of individual patients. Methods We evaluated the capability of a computational simulator to reproduce the heterogeneous effects of COPD on alveolar mechanics as captured in a number of different patient datasets. Results Our results show that accu...

  14. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry. (semiconductor devices)

  15. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry.

  16. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly

  17. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  18. A fast and accurate method for computing the Sunyaev-Zeldovich signal of hot galaxy clusters

    Chluba, Jens; Sazonov, Sergey; Nelson, Kaylea

    2012-01-01

    New generation ground and space-based CMB experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zeldovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe > 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v > 1000 km s^{-1}). It is well-known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. By explicitly imposing Lorentz-invariance of the scattering optical depth, we also show that the kinematic corrections to the SZ intensity signal found in thi...

  3. Fuzzy multiple linear regression: A computational approach

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  4. Computational approach to Riemann surfaces

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existing computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition examples for solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  5. Computer-based personality judgments are more accurate than those made by humans.

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  6. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm2 to 392 cm2. (Author)
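
    A minimal sketch of the surface-area computation that underlies such a triangulation approach, assuming the CT contours have already been converted into a triangle mesh (the function name and array layout below are illustrative, not from the paper):

        import numpy as np

        def triangulated_surface_area(vertices, triangles):
            # vertices : (N, 3) array of 3-D point coordinates (e.g. in mm)
            # triangles: (M, 3) array of vertex indices, one row per triangle
            v = np.asarray(vertices, dtype=float)
            t = np.asarray(triangles, dtype=int)
            e1 = v[t[:, 1]] - v[t[:, 0]]      # first edge of each triangle
            e2 = v[t[:, 2]] - v[t[:, 0]]      # second edge of each triangle
            # area of each triangle is half the norm of the edge cross product
            areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
            return float(areas.sum())         # total surface area (mm^2)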

  7. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants. PMID:24216719

  8. Performance assessment of a fast and accurate scalar optical diffraction field computation algorithm

    Bora Esmer, G.

    2013-03-01

    Dynamic holographic reconstructions can be obtained by employing digital holographic video displays, which are pixelated devices. In practice, spatial light modulators (SLMs) are used for such purposes. The pixelated structure of SLMs can affect the quality of the reconstructed objects. Hence, to obtain better reconstructions, the pixelated structure of SLMs has to be taken into consideration. Rapid calculation of the diffraction field which is emitted by the object is just as important as the accuracy of the diffraction field. The presented algorithm is based on computation of the Fresnel integral over each pixel area on the SLM, thus accurate results are attained. Fast computation of the diffraction field is obtained by scaling of a pre-computed diffraction field, compared to the standard way of computing the diffraction field. The scaling operation is carried out using three interpolation methods: rounding to the nearest value, linear interpolation and cubic interpolation. Although rounding to the nearest value gives the shortest computation time among the three interpolation methods, it produces the largest normalized mean square error (NMSE). The smallest NMSE can be attained when cubic interpolation is used for computation of the diffraction field, but it yields the longest computation time. Even if the NMSE performance of the linear interpolation method is not as good as that of the cubic interpolation method, the computation time can be reduced by nearly half.
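
    As a rough illustration of the accuracy/speed trade-off described above, the sketch below interpolates a pre-computed one-dimensional field onto a finer grid with the three interpolation kinds and reports the NMSE of each; the chirp-like test field and the grid sizes are invented stand-ins, not the Fresnel field used in the paper:

        import numpy as np
        from scipy.interpolate import interp1d

        x_coarse = np.linspace(0.0, 1.0, 257)                # pre-computed grid
        field_coarse = np.cos(40 * np.pi * x_coarse**2)      # stand-in field
        x_dense = np.linspace(0.0, 1.0, 2049)                # target grid
        field_exact = np.cos(40 * np.pi * x_dense**2)        # reference solution

        for kind in ("nearest", "linear", "cubic"):
            field_scaled = interp1d(x_coarse, field_coarse, kind=kind)(x_dense)
            nmse = np.sum((field_scaled - field_exact) ** 2) / np.sum(field_exact**2)
            print(f"{kind:8s} NMSE = {nmse:.3e}")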

  9. 3D Navier-Stokes Time Accurate Solutions Using Multipartitioning Parallel Computation Methodology

    Zha, Ge-Cheng

    1998-01-01

    A parallel CFD code solving the 3D time accurate Navier-Stokes equations with a multipartitioning parallel methodology is being developed in collaboration with Ohio State University within the Air Vehicle Directorate, at Wright Patterson Air Force Base. The advantage of the multipartitioning parallel method is that the domain decomposition will not introduce domain boundaries for the implicit operators. A ring structure data communication is employed so that the implicit time accurate method can be implemented for multi-processors with the same accuracy as for the single processor. No sub-iteration is needed at the domain boundaries. The code has been validated for some typical unsteady flows, which include Couette flow and flow past a cylinder. The code is now being employed for a large scale time accurate wall jet transient flow computation. The preliminary results are promising. The mesh has been refined to capture more details of the flow field. The mesh refinement computation is in progress and would be difficult to successfully implement without the parallel computation techniques used. A modified version of the code with more efficient inversion of the diagonalized block matrix is currently being tested.

  10. Accurate computation of Stokes flow driven by an open immersed interface

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  11. Computer Architecture A Quantitative Approach

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im...

  12. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
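
    A minimal sketch of non-paraxial angular-spectrum propagation of a single layer, in the spirit of the method described above (the layer data, pixel pitch and propagation distance are placeholders; a full layer-oriented CGH would sum the contributions of all depth layers, as noted in the final comment):

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            # u0: complex field of one depth layer sampled with pixel pitch dx
            ny, nx = u0.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)   # exact (non-paraxial) transfer function
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # hologram-plane field = sum over layers k of the propagated sub-holograms:
        # U = sum(angular_spectrum_propagate(layer_k, wl, dx, z_k) for each layer k)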

  13. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨U_UV⟩/2 (where ⟨U_UV⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of four geometric measures of the solute with the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
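
    For orientation, the morphometric ansatz referred to above writes the water-reorganization term as a linear combination of four geometric measures of the solute; a hedged LaTeX rendering (the symbol names are ours, and the coefficients c_i are the ones fitted with the energy-representation method):

        \mu_{\mathrm{reorg}} \;\approx\; c_1\,V_{\mathrm{ex}} + c_2\,A + c_3\,C + c_4\,X ,

    where $V_{\mathrm{ex}}$ is the excluded volume, $A$ the water-accessible surface area, and $C$ and $X$ the integrated mean and Gaussian curvatures of the accessible surface.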

  14. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ∼40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. (paper)

  15. Palm computer demonstrates a fast and accurate means of burn data collection.

    Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E

    2000-01-01

    Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry into a hand-held computer versus handwritten data followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx, Santa Clara, Calif) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash) spreadsheet. One time the data were recorded by the Palm entry method, and the other time the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The total time for the Palm method of data collection and downloading to a personal computer was 23% faster than hand recording with the personal computer entry method (P < 0.05), and fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811

  16. Fast and Accurate Computation of Time-Domain Acoustic Scattering Problems with Exact Nonreflecting Boundary Conditions

    Wang, Li-Lian; Zhao, Xiaodan

    2011-01-01

    This paper is concerned with fast and accurate computation of exterior wave equations truncated via exact circular or spherical nonreflecting boundary conditions (NRBCs, which are known to be nonlocal in both time and space). We first derive analytic expressions for the underlying convolution kernels, which allow for a rapid and accurate evaluation of the convolution with $O(N_t)$ operations over $N_t$ successive time steps. To handle the nonlocality in space, we introduce the notion of boundary perturbation, which enables us to handle general bounded scatterers by solving a sequence of wave equations in a regular domain. We propose an efficient spectral-Galerkin solver with Newmark's time integration for the truncated wave equation in the regular domain. We also provide ample numerical results to show high-order accuracy of NRBCs and efficiency of the proposed scheme.

  17. Machine Computation; An Algorithmic Approach.

    Gonzalez, Richard F.; McMillan, Claude, Jr.

    Designed for undergraduate social science students, this textbook concentrates on using the computer in a straightforward way to manipulate numbers and variables in order to solve problems. The text is problem oriented and assumes that the student has had little prior experience with either a computer or programing languages. An introduction to…

  18. A novel approach for latent print identification using accurate overlays to prioritize reference prints.

    Gantz, Daniel T; Gantz, Donald T; Walch, Mark A; Roberts, Maria Antonia; Buscaglia, JoAnn

    2014-12-01

    A novel approach to automated fingerprint matching and scoring that produces accurate locally and nonlinearly adjusted overlays of a latent print onto each reference print in a corpus is described. The technology, which addresses challenges inherent to latent prints, provides the latent print examiner with a prioritized ranking of candidate reference prints based on the overlays of the latent onto each candidate print. In addition to supporting current latent print comparison practices, this approach can make it possible to return a greater number of AFIS candidate prints because the ranked overlays provide a substantial starting point for latent-to-reference print comparison. To provide the image information required to create an accurate overlay of a latent print onto a reference print, "Ridge-Specific Markers" (RSMs), which correspond to short continuous segments of a ridge or furrow, are introduced. RSMs are reliably associated with any specific local section of a ridge or a furrow using the geometric information available from the image. Latent prints are commonly fragmentary, with reduced clarity and limited minutiae (i.e., ridge endings and bifurcations). Even in the absence of traditional minutiae, latent prints contain very important information in their ridges that permit automated matching using RSMs. No print orientation or information beyond the RSMs is required to generate the overlays. This automated process is applied to the 88 good quality latent prints in the NIST Special Database (SD) 27. Nonlinear overlays of each latent were produced onto all of the 88 reference prints in the NIST SD27. With fully automated processing, the true mate reference prints were ranked in the first candidate position for 80.7% of the latents tested, and 89.8% of the true mate reference prints ranked in the top ten positions. After manual post-processing of those latents for which the true mate reference print was not ranked first, these frequencies increased to 90

  19. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar;

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...

  20. Toward detailed prominence seismology - I. Computing accurate 2.5D magnetohydrodynamic equilibria

    Blokland, J W S

    2011-01-01

    Context. Prominence seismology exploits our knowledge of the linear eigenoscillations for representative magnetohydrodynamic models of filaments. To date, highly idealized models for prominences have been used, especially with respect to the overall magnetic configurations. Aims. We initiate a more systematic survey of filament wave modes, where we consider full multi-dimensional models with twisted magnetic fields representative of the surrounding magnetic flux rope. This requires the ability to compute accurate 2.5 dimensional magnetohydrodynamic equilibria that balance Lorentz forces, gravity, and pressure gradients, while containing density enhancements (static or in motion). Methods. The governing extended Grad-Shafranov equation is discussed, along with an analytic prediction for circular flux ropes for the Shafranov shift of the central magnetic axis due to gravity. Numerical equilibria are computed with a finite element-based code, demonstrating fourth order accuracy on an explicitly known, non-triv...

  1. A Novel MoM Approach for Obtaining Accurate and Efficient Solutions in Optical Rib Waveguide

    YENER, Namık

    2002-01-01

    The optical rib waveguide (ORW) plays an important role in the design of several integrated optical devices. Various methods have been proposed for obtaining the modal field solutions in ORW. However, to the best of our knowledge none of them is capable of providing accurate full-wave benchmark solutions. Here we present a novel MoM approach wherein the modes of a loaded rectangular waveguide are utilized as basis functions and demonstrate that this approach is very efficient and yie...

  2. An institutional approach to computational social creativity

    Corneli, Joseph

    2016-01-01

    Elinor Ostrom's Nobel Memorial Prize-winning work on "the analysis of economic governance, especially the commons" scaffolds an argument for an institutional approach to computational social creativity. Several Ostrom-inspired "creativity design principles" are explored and exemplified to illustrate the computational and institutional structures that are employed in current and potential computational creativity practice.

  3. Soft Computing Approaches To Fault Tolerant Systems

    Neeraj Prakash Srivastava

    2014-05-01

    Full Text Available We present in this paper an introduction to soft computing techniques for fault tolerant systems, together with the terminology and the different ways of achieving fault tolerance. The paper focuses on the problem of fault tolerance using soft computing techniques. The fundamentals of soft computing approaches and their types are discussed, along with an introduction to fault tolerance. The main objective is to show how to implement soft computing approaches for fault detection, isolation and identification. The paper details the application of soft computing, with a wireless sensor network used as an example of a fault tolerant system.

  4. Antenna arrays a computational approach

    Haupt, Randy L

    2010-01-01

    This book covers a wide range of antenna array topics that are becoming increasingly important in wireless applications, particularly in design and computer modeling. Signal processing and numerical modeling algorithms are explored, and MATLAB computer codes are provided for many of the design examples. Pictures of antenna arrays and components provided by industry and government sources are presented with explanations of how they work. Antenna Arrays is a valuable reference for practicing engineers and scientists in wireless communications, radar, and remote sensing, and an excellent textbook for advanced antenna courses.

  5. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte-Carlo approach. This Monte-Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method facilitates a means to use actual operational data leading to improvements over traditional methods (e.g., sensitivity analysis) which assume parametric models that may not accurately capture the possible complex statistical structures in the system input and responses. (author)
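
    A toy Monte-Carlo propagation in the spirit of the approach above; the nominal CCP value and the epistemic/aleatory error models are entirely illustrative, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 100_000
        ccp_nominal = 7.2                              # hypothetical nominal CCP, MW
        epistemic = rng.normal(0.0, 0.02, n_trials)    # e.g. relative model/code bias
        aleatory = rng.normal(0.0, 0.01, n_trials)     # e.g. relative measurement scatter
        ccp_samples = ccp_nominal * (1.0 + epistemic + aleatory)

        print("mean CCP           :", ccp_samples.mean())
        print("95% lower tolerance:", np.percentile(ccp_samples, 5))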

  6. Ontological Approach toward Cybersecurity in Cloud Computing

    Takahashi, Takeshi; Kadobayashi, Youki; FUJIWARA, HIROYUKI

    2014-01-01

    Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an...

  7. Infinitesimal symmetries: a computational approach

    This thesis is concerned with computational aspects in the determination of infinitesimal symmetries and Lie-Baecklund transformations of differential equations. Moreover some problems are calculated explicitly. A brief introduction to some concepts in the theory of symmetries and Lie-Baecklund transformations, relevant for this thesis, are given. The mathematical formalism is shortly reviewed. The jet bundle formulation is chosen, in which, by its algebraic nature, objects can be described very precisely. Consequently it is appropriate for implementation. A number of procedures are discussed, which enable to carry through computations with the help of a computer. These computations are very extensive in practice. The Lie algebras of infinitesimal symmetries of a number of differential equations in Mathematical Physics are established and some of their applications are discussed, i.e., Maxwell equations, nonlinear diffusion equation, nonlinear Schroedinger equation, nonlinear Dirac equations and self dual SU(2) Yang-Mills equations. Lie-Baecklund transformations of Burgers' equation, Classical Boussinesq equation and the Massive Thirring Model are determined. Furthermore, nonlocal Lie-Baecklund transformations of the last equation are derived. (orig.)

  8. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out that it is straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to Greenhouse Gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear as can be seen graphically and understood without fancy statistics, [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate skeptic argument - that the models are wrong and the warming is natural. It finally allows for a closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
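
    The core of this GCM-free argument is an ordinary least-squares fit of global mean temperature against the CO2-based forcing surrogate, with the residuals read as natural variability. A sketch using synthetic placeholder series (not the actual data used by Lovejoy):

        import numpy as np

        years = np.arange(1880, 2021)
        rng = np.random.default_rng(0)
        co2 = 291.0 * np.exp(0.004 * (years - 1880))       # synthetic CO2 curve, ppm
        forcing = np.log2(co2 / co2[0])                    # CO2 doublings as surrogate forcing
        temperature = 0.45 * forcing + 0.1 * rng.standard_normal(years.size)  # synthetic record

        slope, intercept = np.polyfit(forcing, temperature, 1)
        anthropogenic = slope * forcing + intercept        # linear anthropogenic component
        natural = temperature - anthropogenic              # residuals = natural variability
        print(f"apparent sensitivity per CO2 doubling ~ {slope:.2f} deg C")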

  9. [The determinant role of an accurate medicosocial approach in the prognosis of pediatric blood diseases].

    Toppet, M

    2005-01-01

    The care of infancy and childhood blood diseases implies a comprehensive medicosocial approach. This is a prerequisite for regular follow-up, for satisfactory compliance with treatment and for an optimal quality of life for the patient. Different modalities of the medicosocial approach have been developed in the pediatric department (first in the Hospital Saint Pierre and then in the Children's University Hospital HUDERF). The major importance of a recent reform of the increased family allowances is briefly presented. The author underlines the determinant role of an accurate global approach, in which the patient and the family are surrounded by a multidisciplinary team, including social workers. PMID:16454232

  10. COMPUTATIONAL APPROACH TO ORGANIZATIONAL DESIGN

    Alexander Arenas; Roger Guimera; Joan R. Alabart; Hans-Joerg Witt; Albert Diaz-Guilera

    2000-01-01

    The idea of the work is to propose an abstract and simple enough agent-based model for company dynamics, in order to be able to deal computationally and even analytically with the problem of organizational design. Nevertheless, the model should be able to reproduce the essential characteristics of real organizations. The natural way of modeling a company is as a network where the nodes represent employees and the links between them represent communication lines. In our model, problems ar...

  11. Immune based computer virus detection approaches

    TAN Ying; ZHANG Pengtao

    2013-01-01

    The computer virus is considered one of the most horrifying threats to the security of computer systems worldwide. The rapid development of evasion techniques used in viruses causes signature-based computer virus detection techniques to be ineffective. Many novel computer virus detection approaches have been proposed in the past to cope with this ineffectiveness, mainly classified into three categories: static, dynamic and heuristic techniques. Given the natural similarities between the biological immune system (BIS) and the computer security system (CSS), the artificial immune system (AIS) was developed as a new prototype in the community of anti-virus research. The immune mechanisms in the BIS provide the opportunities to construct computer virus detection models that are robust and adaptive with the ability to detect unseen viruses. In this paper, a variety of classic computer virus detection approaches are introduced and reviewed based on the background knowledge of computer virus history. Next, a variety of immune-based computer virus detection approaches are discussed in detail. Promising experimental results suggest that the immune-based computer virus detection approaches are able to detect new variants and unseen viruses at lower false positive rates, which has paved a new way for anti-virus research.

  12. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128
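
    For context, the ONIOM-like extrapolation on which a two-level MIM-type scheme relies can be written, in our own notation and as a sketch rather than the authors' exact expression, as:

        E_{\mathrm{MIM2}} \;\approx\; \sum_i w_i\,E_i^{\mathrm{high}} \;-\; \sum_i w_i\,E_i^{\mathrm{low}} \;+\; E_{\mathrm{whole}}^{\mathrm{low}} ,

    where the sums run over the overlapping subsystems, the weights $w_i$ implement the inclusion-exclusion counting of shared fragments, and "high" and "low" denote the two levels of electronic-structure theory.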

  13. Computer Architecture A Quantitative Approach

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  14. Accurate definition of brain regions position through the functional landmark approach.

    Thirion, Bertrand; Varoquaux, Gaël; Poline, Jean-Baptiste

    2010-01-01

    In many applications of functional Magnetic Resonance Imaging (fMRI), including clinical or pharmacological studies, the definition of the location of the functional activity between subjects is crucial. While current acquisition and normalization procedures improve the accuracy of the functional signal localization, it is also important to ensure that functional foci detection yields accurate results, and reflects between-subject variability. Here we introduce a fast functional landmark detection procedure, that explicitly models the spatial variability of activation foci in the observed population. We compare this detection approach to standard statistical maps peak extraction procedures: we show that it yields more accurate results on simulations, and more reproducible results on a large cohort of subjects. These results demonstrate that explicit functional landmark modeling approaches are more effective than standard statistical mapping for brain functional focus detection. PMID:20879321

  15. Learning and geometry computational approaches

    Smith, Carl

    1996-01-01

    The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...

  16. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
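
    A rough sketch of the idea behind combining two 8-bit video channels into one finely graded luminance scale, assuming the blue channel is attenuated by a nominal factor w before being summed with red (the factor and the function are illustrative, not the VideoSwitcher's actual calibration):

        def combined_gray_level(red, blue, w=128.0):
            # red, blue: 8-bit DAC values (0-255); the attenuated blue channel
            # contributes the fine luminance steps between adjacent red levels
            assert 0 <= red <= 255 and 0 <= blue <= 255
            return red + blue / w          # roughly 255*w + 255 distinguishable levels

        # stepping blue at fixed red produces sub-8-bit luminance increments
        fine_steps = [combined_gray_level(100, b) for b in (0, 64, 128, 192, 255)]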

  17. Cloud computing methods and practical approaches

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  18. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Wang Xingyuan

    2010-01-01

    Full Text Available This paper presents two methods for accurately computing the periodic regions' centers. One method is suited to the general M-sets with integer index number, the other to the general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations which determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
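
    A minimal high-precision Newton iteration in the spirit of the computation above, using mpmath; the example polynomial is the critical-orbit condition for period-3 centers of the classical Mandelbrot set, chosen for illustration rather than taken from the paper's general M-sets:

        from mpmath import mp, mpc, mpf, diff

        mp.dps = 60                       # enough working digits to keep ~48 stable digits

        def newton_root(f, z0, tol=mpf("1e-50"), max_iter=200):
            z = mpc(z0)
            for _ in range(max_iter):
                step = f(z) / diff(f, z)  # numerical derivative at full precision
                z -= step
                if abs(step) < tol:
                    break
            return z

        def period3_condition(c):         # z_3(c) with z_0 = 0 and z_{n+1} = z_n**2 + c
            z = mpc(0)
            for _ in range(3):
                z = z * z + c
            return z

        print(newton_root(period3_condition, mpc("-1.7")))   # real period-3 center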

  19. Simple and Accurate Computations of Solvatochromic Shifts in pi -> pi* Transitions of Aromatic Chromophors

    Heinz, H; Leontidis, E; Heinz, Hendrik; Suter, Ulrich W.; Leontidis, Epameinondas

    2001-01-01

    A new approach is introduced for calculating the spectral shifts of the most bathochromic pi -> pi* transition of an aromatic chromophore in apolar environments. As an example, perylene in solid and liquid n-alkane matrices was chosen, and all shifts are calculated relative to one well-defined solid-inclusion system. It is shown that a simple two-level treatment of the solute using Huckel theory yields spectral shifts in excellent agreement with experimental results for the most prominent inclusion sites of perylene in solid n-alkane surroundings and for the dilute solutions in liquid n-alkanes. The idea is general enough to be applied to any aromatic chromophore in a nonpolar solvent matrix. In contrast to earlier treatments, this approach is based on geometry-dependent polarizabilities, employs a r^-4 dependence for the dispersion energy, is conceptually simple and computationally efficient. Different simple models based on our general approach to compute the UV spectral shifts due to solvation indicate tha...

  20. A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses

    Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A

    2015-01-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...

  1. On the Fourier expansion method for highly accurate computation of the Voigt/complex error function in a rapid algorithm

    Abrarov, S M

    2012-01-01

    In our recent publication [1] we presented an exponential series approximation suitable for highly accurate computation of the complex error function in a rapid algorithm. In this Short Communication we describe how a simplified representation of the proposed complex error function approximation makes possible further algorithmic optimization resulting in a considerable computational acceleration without compromise on accuracy.

  2. Accurate Computation of Reduction Potentials of 4Fe−4S Clusters Indicates a Carboxylate Shift in Pyrococcus furiosus Ferredoxin

    Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager

    2007-01-01

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...

  3. GRID COMPUTING AND FAULT TOLERANCE APPROACH

    Pankaj Gupta,

    2011-10-01

    Full Text Available Grid computing is a means of allocating the computational power of a large number of computers to a complex or difficult computation or problem. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large scale systems that even span organizational boundaries. This paper proposes a method to achieve maximum fault tolerance in the Grid environment by combining reliability considerations with a replication approach and a check-point approach. Fault tolerance is an important property for large scale computational grid systems, where geographically distributed nodes co-operate to execute a task. In order to achieve a high level of reliability and availability, the grid infrastructure should be foolproof and fault tolerant. Since the failure of resources affects job execution fatally, a fault tolerance service is essential to satisfy QOS requirements in grid computing. Commonly utilized techniques for providing fault tolerance are job check pointing and replication. Both techniques mitigate the amount of work lost due to changing system availability but can introduce significant runtime overhead. The latter largely depends on the length of the check pointing interval and the chosen number of replicas, respectively. In case of complex scientific workflows where tasks can execute in a well defined order, reliability is another big challenge because of the unreliable nature of the grid resources.
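
    A bare-bones illustration of the check-point idea for a long-running grid task: the loop periodically pickles its state so a restarted job resumes from the last snapshot instead of from scratch (the file name, interval and work function are all hypothetical):

        import os
        import pickle

        def do_work(step, state):
            # placeholder for one unit of grid computation
            return state + step

        def run_with_checkpoints(n_steps, state=0, ckpt="task.ckpt", every=10):
            start = 0
            if os.path.exists(ckpt):                 # resume after a failure
                with open(ckpt, "rb") as fh:
                    start, state = pickle.load(fh)
            for step in range(start, n_steps):
                state = do_work(step, state)
                if (step + 1) % every == 0:          # periodic checkpoint
                    with open(ckpt, "wb") as fh:
                        pickle.dump((step + 1, state), fh)
            return state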

  4. Toward exascale computing through neuromorphic approaches.

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  5. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1

  6. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
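
    The two records above score segmentation accuracy with volume-overlap metrics; a small sketch of how DSC and VD are typically computed from binary voxel masks (the array layout and voxel volume are assumptions, not from the paper):

        import numpy as np

        def dice_similarity(seg_a, seg_b):
            # seg_a, seg_b: boolean voxel masks of the same shape; result in %
            a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
            return 200.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def volume_difference(seg_a, seg_b, voxel_volume_mm3):
            # absolute difference of the segmented volumes, in mm3
            n_a = int(np.count_nonzero(seg_a))
            n_b = int(np.count_nonzero(seg_b))
            return abs(n_a - n_b) * voxel_volume_mm3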

  7. Parallel Computation in Econometrics: A Simplified Approach

    Jurgen A. Doornik; Shephard, Neil; Hendry, David F.

    2004-01-01

    Parallel computation has a long history in econometric computing, but is not at all widespread. We believe that a major impediment is the labour cost of coding for parallel architectures. Moreover, programs for specific hardware often become obsolete quite quickly. Our approach is to take a popular matrix programming language (Ox), and implement a message-passing interface using MPI. Next, object-oriented programming allows us to hide the specific parallelization code, so that a program does...

  8. A Big Data Approach to Computational Creativity

    Varshney, Lav R; Varshney, Kush R; Bhattacharjya, Debarun; Schoergendorfer, Angela; Chee, Yi-Min

    2013-01-01

    Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not been able to achieve a valid selective step. This work shows how bringing data sources from the creative domain and from hedonic psychophysics together with big data analytics techniques can overcome this shortcoming to yield a system that can produce novel and high-quality creative artifacts. Our data-driven approach is demonstrated through a computational creativity system for culinary recipes and menus we developed and deployed, which can operate either autonomously or semi-autonomously with human interaction. We also comment on the volume, velocity, variety, and veracity of data in computational creativity.

  9. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    Gendron, É.

    2014-08-04

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
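
    A dense-matrix sketch of an MMSE tomographic reconstructor and the residual covariance it implies; small NumPy arrays stand in for the ~40 000 × 40 000 GPU-accelerated matrices of the real system, and the function and variable names are ours:

        import numpy as np

        def mmse_reconstructor(c_xy, c_yy, c_noise):
            # R = C_xy (C_yy + C_n)^(-1): maps WFS measurements to the sought phase
            return c_xy @ np.linalg.inv(c_yy + c_noise)

        def tomographic_residual_covariance(c_xx, c_xy, R):
            # covariance of the residual phase, from which a PSF can be estimated
            return c_xx - R @ c_xy.T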

  10. Unilateral hyperlucency of the lung: a systematic approach to accurate radiographic interpretation

    Noh, Hyung Jun; Oh, Yu Whan; Choi, Eun Jung; Seo, Bo Kyung; Cho, Kyu Ran; Kang, Eun Young; Kim, Jung Hyuk [Korea University College of Medicine, Seoul (Korea, Republic of)

    2002-12-01

    The radiographic appearance of a unilateral hyperlucent lung is related to various conditions, the accurate radiographic interpretation of which requires a structured approach as well as an awareness of the spectrum of these entities. Firstly, it is important to determine whether a hyperlucent hemithorax is associated with artifacts resulting from rotation of the patient, grid cutoff, or the heel effect. The second step is to determine whether or not a hyperlucent lung is abnormal. Lung that is in fact normal may appear hyperlucent because of diffusely increased opacity of the opposite hemithorax. Thirdly, thoracic wall and soft tissue abnormalities such as mastectomy or Poland syndrome may cause hyperlucency. Lastly, abnormalities of lung parenchyma may result in hyperlucency. Lung abnormalities can be divided into two groups: a) obstructive or compensatory hyperinflation; and b) reduced vascular perfusion of the lung due to congenital or acquired vascular abnormalities. In this article, we describe and illustrate the imaging spectrum of these causes and outline a structured approach to accurate radiographic interpretation.

  11. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N=26 HCM-patients underwent leMDCT (64-slice-CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 in CMR (p > 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT-measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  12. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N=26 HCM-patients underwent leMDCT (64-slice-CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 in CMR (p > 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT-measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  13. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions', which we herein call `adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea is noted distinctly here for the first time and may be generalized to other applications, such as free oscillations, or applied to other methods in which these cases are encountered. Additionally, we have made further improvements to the generalized reflection/transmission coefficient method: mode observers associated with only the free surface and the low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, without excessive calculation, both no loss of any physically existent mode and high precision. Finally, the conventional definition of the fundamental mode is reconsidered, which is required for the cases under study, and some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers, aided by the concept of the `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.

  14. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    Balzar, D. [Ruder Boskovic Inst., Zagreb (Croatia); Ledbetter, H. [National Inst. of Standards and Technology, Boulder, CO (United States)

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
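    As a hedged illustration of the line-profile functions involved (parameter values are arbitrary, not taken from the paper), the sketch below evaluates an exact Voigt profile and the Thompson-Cox-Hastings pseudo-Voigt approximation that is commonly used as the faster fitting function:

      import numpy as np
      from scipy.special import voigt_profile

      x = np.linspace(-2.0, 2.0, 801)      # angular offset from the peak (arbitrary units)
      sigma, gamma = 0.05, 0.08            # Gaussian / Lorentzian width parameters (toy)
      exact = voigt_profile(x, sigma, gamma)

      # Thompson-Cox-Hastings pseudo-Voigt: eta*Lorentzian + (1-eta)*Gaussian of common FWHM.
      fG = 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))      # Gaussian FWHM
      fL = 2.0 * gamma                                   # Lorentzian FWHM
      f = (fG**5 + 2.69269*fG**4*fL + 2.42843*fG**3*fL**2
           + 4.47163*fG**2*fL**3 + 0.07842*fG*fL**4 + fL**5) ** 0.2
      eta = 1.36603*(fL/f) - 0.47719*(fL/f)**2 + 0.11116*(fL/f)**3
      gauss = 2.0*np.sqrt(np.log(2.0)/np.pi)/f * np.exp(-4.0*np.log(2.0)*(x/f)**2)
      lorentz = 2.0/(np.pi*f) / (1.0 + 4.0*(x/f)**2)
      pseudo = eta*lorentz + (1.0 - eta)*gauss
      print("max |Voigt - pseudo-Voigt|:", np.abs(exact - pseudo).max())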

  15. Hybrid soft computing approaches research and applications

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  16. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires considerable expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high-contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912
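    The general pattern (pixel-wise Bayesian classification of spectral feature vectors evaluated with an ROC curve) can be sketched as below; the data are synthetic stand-ins, not the authors' FT-IR pipeline, and the class labels are invented for illustration:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.metrics import roc_curve
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n_pixels, n_features = 2000, 20                    # spectral features per pixel (toy)
      X = rng.standard_normal((n_pixels, n_features))
      y = (X[:, 0] + 0.5 * rng.standard_normal(n_pixels) > 0).astype(int)  # synthetic labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = GaussianNB().fit(X_tr, y_tr)                 # Bayesian classifier
      score = clf.predict_proba(X_te)[:, 1]
      fpr, tpr, _ = roc_curve(y_te, score)
      # probability of detection at roughly 10% probability of false alarm
      print("Pd at Pfa ~ 0.1:", tpr[np.searchsorted(fpr, 0.1)])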

  17. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Saumya Tiwari

    Full Text Available Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires considerable expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high-contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.

  18. A novel approach to accurate portal dosimetry using CCD-camera based EPIDs

    A new method for portal dosimetry using CCD camera-based electronic portal imaging devices (CEPIDs) is demonstrated. Unlike previous approaches, it is not based on a priori assumptions concerning CEPID cross-talk characteristics. In this method, the nonsymmetrical and position-dependent cross-talk is determined by directly imaging a set of cross-talk kernels generated by small fields ('pencil beams') exploiting the high signal-to-noise ratio of a cooled CCD camera. Signal calibration is achieved by imaging two reference fields. Next, portal dose images (PDIs) can be derived from electronic portal dose images (EPIs), in a fast forward-calculating iterative deconvolution. To test the accuracy of these EPI-based PDIs, a comparison is made to PDIs obtained by scanning diode measurements. The method proved accurate to within 0.2±0.7% (1 SD), for on-axis symmetrical and asymmetrical fields with different field widths and homogeneous phantom thicknesses, off-axis Alderson thorax fields and a strongly modulated IMRT field. Hence, the proposed method allows for fast, accurate portal dosimetry. In addition, it is demonstrated that the CEPID cross-talk signal is not only induced by optical photon reflection and scatter within the CEPID structure, but also by high-energy back-scattered radiation from CEPID elements (mirror and housing) towards the fluorescent screen
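    A minimal sketch of a forward-calculating iterative deconvolution of the kind described (a Van Cittert-style update; the kernel, image and iteration count are synthetic and not the published calibration data):

      import numpy as np
      from scipy.signal import fftconvolve

      def iterative_deconvolve(epi, kernel, n_iter=20):
          # Repeatedly add the residual EPI - K*PDI back onto the estimate.
          pdi = epi.copy()
          for _ in range(n_iter):
              pdi += epi - fftconvolve(pdi, kernel, mode="same")
          return pdi

      h = np.hanning(15)
      kernel = np.outer(h, h); kernel /= kernel.sum()     # toy cross-talk kernel
      true_pdi = np.zeros((64, 64)); true_pdi[20:44, 20:44] = 1.0
      epi = fftconvolve(true_pdi, kernel, mode="same")    # simulated portal image
      est = iterative_deconvolve(epi, kernel)
      print("max deviation from true PDI:", np.abs(est - true_pdi).max())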

  19. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there remains a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems

  20. Accurate characterization of mask defects by combination of phase retrieval and deterministic approach

    Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong

    2016-06-01

    In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system providing only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of the defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to use a phase retrieval algorithm first, to retrieve the general shape of the mask, and then a deterministic approach to characterize precisely the detected defects.
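    A hedged sketch of the first stage only (an error-reduction / Gerchberg-Saxton type phase retrieval loop on synthetic data; the support size, iteration count and object are invented, and the deterministic refinement stage is not shown):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      support = np.zeros((n, n), dtype=bool); support[24:40, 24:40] = True
      obj = np.where(support, rng.random((n, n)), 0.0)     # unknown mask pattern (toy)
      measured = np.abs(np.fft.fft2(obj))                  # detector records only |diffraction|

      est = rng.random((n, n)) * support                   # random initial guess
      for _ in range(500):
          F = np.fft.fft2(est)
          F = measured * np.exp(1j * np.angle(F))          # impose the measured modulus
          est = np.fft.ifft2(F).real
          est = np.where(support, np.clip(est, 0.0, None), 0.0)  # support + positivity
      print("relative error:", np.linalg.norm(est - obj) / np.linalg.norm(obj))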

  1. Computational Approach for Developing Blood Pump

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as temporary support for sick ventricles in those who suffer from late-stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact designs with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body

  2. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  3. Handbook of computational approaches to counterterrorism

    Subrahmanian, VS

    2012-01-01

    Terrorist groups throughout the world have been studied primarily through the use of social science methods. However, major advances in IT during the past decade have led to significant new ways of studying terrorist groups, making forecasts, learning models of their behaviour, and shaping policies about their behaviour. Handbook of Computational Approaches to Counterterrorism provides the first in-depth look at how advanced mathematics and modern computing technology is shaping the study of terrorist groups. This book includes contributions from world experts in the field, and presents extens

  4. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  5. Isogenies of Elliptic Curves: A Computational Approach

    Shumow, Daniel

    2009-01-01

    Isogenies, the mappings of elliptic curves, have become a useful tool in cryptology. These mathematical objects have been proposed for use in computing pairings, constructing hash functions and random number generators, and analyzing the reducibility of the elliptic curve discrete logarithm problem. With such diverse uses, understanding these objects is important for anyone interested in the field of elliptic curve cryptography. This paper, targeted at an audience with a knowledge of the basic theory of elliptic curves, provides an introduction to the necessary theoretical background for understanding what isogenies are and their basic properties. This theoretical background is used to explain some of the basic computational tasks associated with isogenies. Herein, algorithms for computing isogenies are collected and presented with proofs of correctness and complexity analyses. As opposed to the complex analytic approach provided in most texts on the subject, the proofs in this paper are primarily algebraic i...

  6. An accurate, fast, mathematically robust, universal, non-iterative algorithm for computing multi-component diffusion velocities

    Ambikasaran, Sivaram

    2015-01-01

    Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities, for low density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criteria, which scales as $\mathcal{O}(N^3)$, when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocal of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of matrix $V$ is computed in a fast manner at a computational complexity of $\...
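    The key cost argument can be illustrated with a generic "diagonal plus low rank" solve via the Woodbury identity (this is only a sketch of the low-rank idea, not the paper's Stefan-Maxwell algorithm; matrix sizes and rank are arbitrary):

      import numpy as np

      rng = np.random.default_rng(2)
      N, r = 2000, 3
      d = 1.0 + rng.random(N)                 # diagonal part D
      U = rng.standard_normal((N, r))         # low-rank factors, rank r independent of N
      V = rng.standard_normal((N, r))
      b = rng.standard_normal(N)

      # Woodbury: (D + U V^T)^{-1} b = D^{-1}b - D^{-1}U (I + V^T D^{-1} U)^{-1} V^T D^{-1} b
      Dinv_b = b / d
      Dinv_U = U / d[:, None]
      x = Dinv_b - Dinv_U @ np.linalg.solve(np.eye(r) + V.T @ Dinv_U, V.T @ Dinv_b)

      x_ref = np.linalg.solve(np.diag(d) + U @ V.T, b)   # dense O(N^3) reference
      print("agrees with dense solve:", np.allclose(x, x_ref))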

  7. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the bla CARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that bla CARB-17 like genes were more conservative than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this bla CARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of bla CARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by bla CARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  8. Advanced computational approaches to biomedical engineering

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  9. Computational Approaches to Nucleic Acid Origami.

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms. PMID:26348196

  10. Fast and Accurate Computation Tools for Gravitational Waveforms from Binary Systems with any Orbital Eccentricity

    Pierro, V; Spallicci, A D; Laserra, E; Recano, F

    2001-01-01

    The relevance of orbital eccentricity in the detection of gravitational radiation from (steady state) binary stars is emphasized. Computationally effective (fast and accurate) tools for constructing gravitational wave templates from binary stars with any orbital eccentricity are introduced, including tight estimation criteria for the pertinent truncation and approximation errors.

  11. Accurate local density photoionization cross sections by LCAO Stieltjes imaging approach

    Photoionization cross sections are evaluated at the local density level, avoiding any potential shape approximation, by employing the Stieltjes imaging (ST) technique in conjunction with large (STO) basis LCAO calculations. The ST technique proves accurate in comparison with full continuum calculations of noble-gas ionization. Several choices for the final-state potentials, including different exchange-correlation potentials, are tested on the noble gases, as well as on H2O and CO. Finally, several molecules are investigated employing the transition-state Xα potential and the results compared with available experimental data and previous calculations. The accuracy of the proposed approach appears quite comparable to the ab initio static-exchange level, removing spurious resonances associated with the muffin-tin approximation, and should prove useful in the analysis and interpretation of cross-section data for large systems, where it is shown that basis-set requirements become less stringent. The accuracy obtainable for the cross-section profiles allows close comparison with experimental data, which, although showing good general agreement for the shape of the cross-section profile, indicates the need for further refinement of the theoretical model. 55 refs., 15 figs., 1 tab

  12. A fourth order accurate finite difference scheme for the computation of elastic waves

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate in the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
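    For illustration only, the same style of discretization (fourth-order in space, second-order in time) applied to the scalar 1-D wave equation looks as follows; the paper itself treats the first-order velocity-stress system in higher dimensions, and the grid and CFL numbers below are arbitrary:

      import numpy as np

      n, c, dx, dt, nsteps = 400, 1.0, 1.0, 0.4, 500        # CFL number c*dt/dx = 0.4
      x = np.arange(n) * dx
      u_prev = np.exp(-0.01 * (x - 0.5 * n * dx) ** 2)      # smooth initial pulse
      u = u_prev.copy()                                     # zero initial velocity

      def lap4(f):
          # fourth-order accurate second derivative, periodic boundaries
          return (-np.roll(f, 2) + 16*np.roll(f, 1) - 30*f
                  + 16*np.roll(f, -1) - np.roll(f, -2)) / (12.0 * dx**2)

      r2 = (c * dt) ** 2
      for _ in range(nsteps):
          u_next = 2.0*u - u_prev + r2 * lap4(u)            # second-order leapfrog in time
          u_prev, u = u, u_next
      print("discrete energy proxy:", float(np.sum(u**2)))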

  13. An event-driven method to simulate Filippov systems with accurate computing of sliding motions

    Piiroinen, PT; Kuznetsov, YA

    2005-01-01

    This paper describes how to use smooth solvers for simulation of a class of piecewise smooth dynamical systems, called Filippov systems, with discontinuous vector fields. In these systems, constrained motion along a discontinuity surface (so-called sliding) is possible and requires special treatment numerically. The introduced algorithms are based on an extension to Filippov's method to stabilize the sliding flow together with accurate detection of the entrance and exit of sliding regions. The ...
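    The event-driven ingredient (integrating a smooth vector field until the trajectory hits the discontinuity surface, where a sliding check would then be applied) can be sketched with scipy's event handling; the two-dimensional system below is a toy example, not one from the paper:

      import numpy as np
      from scipy.integrate import solve_ivp

      def f_plus(t, x):                # smooth vector field used while h(x) = x[0] > 0
          return [-1.0, x[0] - 0.5 * x[1]]

      def surface(t, x):               # discontinuity surface h(x) = 0
          return x[0]
      surface.terminal = True          # stop the integration exactly at the crossing
      surface.direction = -1           # trigger only when h decreases through zero

      sol = solve_ivp(f_plus, (0.0, 10.0), [1.0, 0.0], events=surface,
                      rtol=1e-10, atol=1e-12)
      print("hit switching surface at t =", sol.t_events[0][0],
            "state =", sol.y_events[0][0])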

  14. Computer Forensics Education - the Open Source Approach

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  15. Introducing Computational Approaches in Intermediate Mechanics

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)

  16. Predicting microbial interactions through computational approaches.

    Li, Chenhao; Lim, Kun Ming Kenneth; Chng, Kern Rei; Nagarajan, Niranjan

    2016-06-01

    Microorganisms play a vital role in various ecosystems and characterizing interactions between them is an essential step towards understanding the organization and function of microbial communities. Computational prediction has recently become a widely used approach to investigate microbial interactions. We provide a thorough review of emerging computational methods organized by the type of data they employ. We highlight three major challenges in inferring interactions using metagenomic survey data and discuss the underlying assumptions and mathematics of interaction inference algorithms. In addition, we review interaction prediction methods relying on metabolic pathways, which are increasingly used to reveal mechanisms of interactions. Furthermore, we also emphasize the importance of mining the scientific literature for microbial interactions - a largely overlooked data source for experimentally validated interactions. PMID:27025964

  17. Computational approaches to analogical reasoning current trends

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  18. Interacting electrons theory and computational approaches

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  19. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    Darae Jeong

    2016-01-01

    Full Text Available We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a European cash-or-nothing call option. To show the effect of the adaptive stepping strategy, we calculate option price and its Greeks with various tolerances. Several numerical results confirm that the proposed method is fast, accurate, and practical in computing option price and the Greeks.
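    A hedged sketch of one ingredient, computing Delta on a nonuniform grid in the underlying asset price with a three-point central difference; here the closed-form European call price stands in for the finite-difference PDE solution, and all parameter values are arbitrary:

      import numpy as np
      from scipy.stats import norm

      K, r, sigma, T = 100.0, 0.03, 0.2, 0.25

      def bs_call(S):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

      S = K * np.exp(np.linspace(-0.5, 0.5, 81) ** 3)     # nonuniform grid, refined near K
      V = bs_call(S)
      hm, hp = np.diff(S)[:-1], np.diff(S)[1:]            # left/right spacings at interior nodes
      delta = (-hp / (hm * (hm + hp)) * V[:-2]
               + (hp - hm) / (hm * hp) * V[1:-1]
               + hm / (hp * (hm + hp)) * V[2:])           # nonuniform central difference

      d1 = (np.log(S[1:-1] / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
      print("max |Delta error|:", np.abs(delta - norm.cdf(d1)).max())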

  20. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  1. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  2. Efficient and accurate statistical analog yield optimization and variation-aware circuit sizing based on computational intelligence techniques

    Liu, Bo; Fernández, Francisco V.; Gielen, Georges

    2011-01-01

    In nanometer complementary metal-oxide-semiconductor technologies, worst-case design methods and response-surface-based yield optimization methods face challenges in accuracy. Monte-Carlo (MC) simulation is general and accurate for yield estimation, but its efficiency is not high enough to make MC-based analog yield optimization, which requires many yield estimations, practical. In this paper, techniques inspired by computational intelligence are used to speed up yield optimization without sa...

  3. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    Darae Jeong; Minhyun Yoo; Junseok Kim

    2016-01-01

    We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a Eur...

  4. Sculpting the band gap: a computational approach.

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  5. Fast and accurate earthquake location within complex medium using a hybrid global-local inversion approach

    Chaoying Bai; Rui Zhao; Stewart Greenhalgh

    2009-01-01

    A novel hybrid approach for earthquake location is proposed which uses a combined coarse global search and fine local inversion with a minimum search routine, plus an examination of the root mean squares (RMS) error distribution. The method exploits the advantages of network ray tracing and robust formulation of the Frechet derivatives to simultaneously update all possible initial source parameters around most local minima (including the global minimum) in the solution space, and finally to determine the likely global solution. Several synthetic examples involving a 3-D complex velocity model and a challenging source-receiver layout are used to demonstrate the capability of the newly-developed method. This new global-local hybrid solution technique not only incorporates the significant benefits of our recently published hypocenter determination procedure for multiple earthquake parameters, but also offers the attractive features of global optimal searching in the RMS travel time error distribution. Unlike a traditional global search method such as the Monte Carlo approach, where millions of tests have to be done to find the final global solution, the new method conducts only a matrix-inversion type local search, but does so multiple times simultaneously throughout the model volume to seek a global solution. The search is aided by inspection of the RMS error distribution. Benchmark tests against two popular approaches, the direct grid search method and the oct-tree importance sampling method, indicate that the hybrid global-local inversion yields comparable location accuracy and is not sensitive to modest levels of noise in the data, but, more importantly, it offers a two-order-of-magnitude speed-up in computational effort. Such an improvement, combined with high accuracy, makes it a promising hypocenter determination scheme for earthquake early warning, tsunami early warning, rapid hazard assessment and emergency response after strong earthquake occurrence.
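    The two-stage idea (a coarse global grid search on the RMS misfit followed by a local gradient-type inversion started from the best node) can be sketched on a toy homogeneous-velocity problem; the station layout, velocity and noise-free travel times below are invented, and the real method uses 3-D network ray tracing:

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      stations = rng.uniform(0.0, 50.0, size=(8, 2))        # km
      v = 5.0                                               # km/s, constant velocity (toy)
      true_src, true_t0 = np.array([22.0, 31.0]), 1.5
      t_obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / v

      def residuals(p):
          x, y, t0 = p
          return t0 + np.linalg.norm(stations - [x, y], axis=1) / v - t_obs

      # Stage 1: coarse global grid search (std of demeaned residuals removes unknown t0).
      gx, gy = np.meshgrid(np.arange(0.0, 51.0, 5.0), np.arange(0.0, 51.0, 5.0))
      best = min(zip(gx.ravel(), gy.ravel()),
                 key=lambda xy: np.std(t_obs - np.linalg.norm(stations - xy, axis=1) / v))

      # Stage 2: local least-squares refinement from the best grid node.
      t0_guess = np.mean(t_obs - np.linalg.norm(stations - best, axis=1) / v)
      fit = least_squares(residuals, x0=[best[0], best[1], t0_guess])
      print("recovered source and origin time:", fit.x)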

  6. Computational modeling approaches in gonadotropin signaling.

    Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne

    2016-07-01

    Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991

  7. Accurate computation of wave loads on a bottom fixed circular cylinder

    Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.

    2012-01-01

    to a range of other problems. The central idea is to drive an inner CFD model that resolves the flow around the structure with an outer wave model that is based on potential flow theory. By letting the potential flow solver describe the waves in the outer flow domain and the Navier-Stokes solver describe the flow in the inner domain, a fast and accurate description of wave loads on offshore structures is obtained, even for breaking waves. Engsig-Karup et al. [1] have recently developed a fully nonlinear potential flow solver (OceanWave3D) to represent propagation and development of fully nonlinear...

  8. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
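    A compact hedged sketch of the CHO statistic computed from sample channel statistics (the images, signal and channel templates below are synthetic stand-ins rather than MAP reconstructions):

      import numpy as np

      rng = np.random.default_rng(4)
      n_pix, n_ch, n_img = 32 * 32, 6, 200
      U = rng.standard_normal((n_pix, n_ch))              # channel templates (toy)
      signal = np.zeros(n_pix); signal[500:520] = 0.05    # weak, known signal

      bg = rng.standard_normal((n_img, n_pix))            # signal-absent images
      sp = bg + signal                                    # signal-present images
      v_bg, v_sp = bg @ U, sp @ U                         # channel outputs

      dmean = v_sp.mean(axis=0) - v_bg.mean(axis=0)
      S = 0.5 * (np.cov(v_bg.T) + np.cov(v_sp.T))         # intra-class channel covariance
      w = np.linalg.solve(S, dmean)                       # Hotelling template in channel space
      print("CHO detectability (SNR):", np.sqrt(dmean @ w))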

  9. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    Collier, G.

    1967-01-01

    Computer program, an Iterative Scheme Using a Direct Solution, obtains double precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX equals B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
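    The underlying idea, iterative refinement in which the solve is carried out in single precision while residuals are accumulated in double precision, can be sketched as follows (a generic illustration with a random well-conditioned system, not the ISUDS source):

      import numpy as np

      rng = np.random.default_rng(5)
      n = 200
      A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned test matrix
      b = rng.standard_normal(n)

      A32, b32 = A.astype(np.float32), b.astype(np.float32)
      x = np.linalg.solve(A32, b32).astype(np.float64)    # single-precision direct solve
      for _ in range(5):
          r = b - A @ x                                   # residual in double precision
          x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
      print("refined residual norm:", np.linalg.norm(b - A @ x))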

  10. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    Petkova, Margarita; Springel, Volker

    2011-08-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretization of the radiative transfer equation in Boltzmann form with adjustable angular resolution that, in principle, works equally well in the optically-thin and optically-thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid angle, tessellating the unit sphere. Each of these 'cone fields' is transported independently, with constant intensity as a function of the direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger time-steps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally cope with an arbitrary number of sources, as well as with scattering. The method casts sharp shadows, subject to the limitations induced by the adopted angular resolution. If the number of point sources is small and scattering is unimportant, our implementation can alternatively treat each source exactly in angular space, producing shadows whose sharpness is only limited by the grid resolution. A third hybrid alternative is to treat only a small number of the locally most luminous point sources explicitly, with the rest of the radiation intensity followed in a radiative diffusion approximation. We have implemented the method in the moving-mesh code AREPO, where it is coupled to the hydrodynamics in an operator-splitting approach that subcycles the radiative transfer alternately with the hydrodynamical evolution steps. We also discuss our treatment of basic photon sink processes relevant to cosmological reionization, with a chemical network that can

  11. Necessary conditions for accurate computations of three-body partial decay widths

    Garrido, E; Fedorov, D V

    2008-01-01

    The partial width for decay of a resonance into three fragments is largely determined at distances where the energy is smaller than the effective potential producing the corresponding wave function. At short distances the many-body properties are accounted for by preformation or spectroscopic factors. We use the adiabatic expansion method combined with the WKB approximation to obtain the indispensable cluster model wave functions at intermediate and larger distances. We test the concept by deriving conditions for the minimal basis expressed in terms of partial waves and radial nodes. We compare results for different effective interactions and methods. Agreement is found with experimental values for a sufficiently large basis. We illustrate the ideas with realistic examples from $\alpha$-emission of $^{12}$C and two-proton emission of $^{17}$Ne. Basis requirements for accurate momentum distributions are briefly discussed.

  12. Accurate computation of surface stresses and forces with immersed boundary methods

    Goza, Andres; Morley, Benjamin; Colonius, Tim

    2016-01-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is a...

  13. Accurate and efficient computation of the Green's tensor for stratified media

    Gay-Balmaz, P.; Martin, O. J. F.; Paulus, M.

    2000-01-01

    We present a technique for the computation of the Green's tensor in three-dimensional stratified media composed of an arbitrary number of layers with different permittivities and permeabilities (including metals with a complex permittivity). The practical implementation of this technique is discussed in detail. In particular, we show how to efficiently handle the singularities occurring in Sommerfeld integrals, by deforming the integration path in the complex plane. Examples assess the accura...

  14. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  15. A Review of Computer Vision based Algorithms for accurate and efficient Object Detection

    Aneissha Chebolu

    2013-05-01

    Full Text Available In this paper two vision-based algorithms are adopted to locate and identify the objects and obstacles from the environment. In recent days, robot vision and navigation are emerging as essential services, especially in hazardous environments. In this work, two vision-based techniques – color-based thresholding and template matching, both correlation-based similarity measure and FFT (Fast Fourier Transform) based – have been adopted and used for object identification and classification using the image captured by a CCD camera attached to the robotic arm. Then, the robotic arm manipulator is integrated with the computer for further manipulation of the objects based on the application.
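    A hedged sketch of the FFT-based template-matching ingredient (locating a small template in a larger image by cross-correlation computed with FFTs; the image and template are synthetic, and the color-thresholding stage is not shown):

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(6)
      image = rng.random((128, 128))
      template = image[40:56, 70:86].copy()               # patch known to sit at (40, 70)

      # Cross-correlation = convolution with the flipped, demeaned template.
      corr = fftconvolve(image - image.mean(),
                         (template - template.mean())[::-1, ::-1], mode="valid")
      row, col = np.unravel_index(np.argmax(corr), corr.shape)
      print("template located at:", (row, col))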

  16. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS which generate associated currents when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles that are measured by the high quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO are presented. The full measured rotation profiles are included in the simulation. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored and simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  17. A coupled BEM/scaled boundary FEM formulation for accurate computations in linear elastic fracture mechanics.

    Bird, G. E.; Trevelyan, J; Augarde, C.E.

    2010-01-01

    Issues relating to the practical implementation of the coupled boundary element–scaled boundary finite element method are addressed in this paper. A detailed approach highlights fully the process of applying boundary conditions, including the treatment of examples in which the assumptions made in previous work are no longer valid. Verification of the method is undertaken by means of estimating stress intensity factors and comparing them against analytical solutions. The coupled algorithm show...

  18. Music Genre Classification Systems - A Computational Approach

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  19. A computational approach to negative priming

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). The effect depends sensitively on subtle parameter changes (such as the response-stimulus interval). The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  20. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, which results in upward flow that acts as a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  1. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  2. A fast and accurate method to compute the mass return from multiple stellar populations

    Calura, F; Nipoti, C

    2013-01-01

    The mass returned to the ambient medium by aging stellar populations over cosmological times sums up to a significant fraction (20% - 30% or more) of their initial mass. This continuous mass injection plays a fundamental role in phenomena such as galaxy formation and evolution, fueling of supermassive black holes in galaxies and the consequent (negative and positive) feedback phenomena, and the origin of multiple stellar populations in globular clusters. In numerical simulations the calculation of the mass return can be time consuming, since it requires at each time step the evaluation of a convolution integral over the whole star formation history, so the computational time increases quadratically with the number of time-steps. The situation can be especially critical in hydrodynamical simulations, where different grid points are characterized by different star formation histories, and the gas cooling and heating times are shorter by orders of magnitude than the characteristic stellar lifetimes. In this pape...
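
    The quadratic cost mentioned in the record above comes from evaluating, at every time step, a convolution of the star formation history with the specific mass-return rate of a single stellar population. The sketch below shows that naive O(N^2) evaluation under stated assumptions; the power-law return rate is a generic illustrative choice, not the authors' prescription.

```python
# A small sketch of the convolution the abstract refers to: at each time step the
# mass return rate is the star formation history (SFH) convolved with the specific
# mass-return rate of a single stellar population (SSP). The direct evaluation below
# is O(N^2) in the number of time steps, which is the cost the paper aims to reduce;
# the SSP return law is an illustrative toy law, not the authors' prescription.
import numpy as np

def ssp_return_rate(age_gyr):
    """Illustrative specific mass-return rate of an SSP (per Gyr, per unit formed mass)."""
    return np.where(age_gyr > 0.03, 0.05 / np.maximum(age_gyr, 0.03), 0.0)  # toy ~1/t law after 30 Myr

def mass_return_naive(sfr, dt_gyr):
    """Mass return rate at every step: direct O(N^2) convolution over the whole SFH."""
    n = len(sfr)
    out = np.zeros(n)
    for i in range(n):                           # current time step
        ages = (i - np.arange(i + 1)) * dt_gyr   # ages of all previously formed populations
        out[i] = np.sum(sfr[: i + 1] * ssp_return_rate(ages)) * dt_gyr
    return out

# Toy usage: constant star formation of 1 Msun/Gyr over 10 Gyr, 0.01 Gyr steps.
dt = 0.01
sfr = np.ones(1000)
print(mass_return_naive(sfr, dt)[-1])            # mass return rate at t = 10 Gyr
```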

  3. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  4. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  5. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  6. SPEECH RECOGNITION - A COMPUTER MEDIATED APPROACH

    Kaliyaperumal Karthikeyan

    2012-01-01

    The computer revolution is now well advanced, but although computers are beginning to appear in many forms of the work people do, the domain of computers is still significantly small because of the specialized training needed to use them and the lack of intelligence in computer systems. In the history of computer science five generations have passed by, each adding a new innovative technology that brought computers nearer and nearer to the people. Now it is the sixth generation, whose...

  7. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter ``nearsightedness'' principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. A Desktop Grid Computing Approach for Scientific Computing and Visualization

    Constantinescu-Fuløp, Zoran

    2008-01-01

    Scientific Computing is the collection of tools, techniques, and theories required to solve, on a computer, mathematical models of problems from science and engineering, and its main goal is to gain insight into such problems. Generally, it is difficult to understand or communicate information from complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments etc.). Therefore, support of Scientific Vi...

  9. Computational approaches to predict bacteriophage-host relationships.

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications. PMID:26657537
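
    One of the compositional signals reviewed in the record above can be sketched as a k-mer frequency comparison: the phage genome and each candidate host genome are summarized by tetranucleotide frequency vectors, and the host with the most similar composition is predicted. The code below is a generic illustration of that idea, not the authors' benchmark pipeline; the sequences, the value of k, and the distance metric are illustrative assumptions.

```python
# Hedged sketch of a compositional phage-host signal: phage and candidate host
# genomes are summarized by tetranucleotide frequency vectors, and the host whose
# composition is closest to the phage is predicted. Generic illustration only.
from itertools import product
import numpy as np

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
KMER_INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_profile(seq):
    """Normalized tetranucleotide frequency vector of a DNA sequence."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        idx = KMER_INDEX.get(seq[i : i + K])
        if idx is not None:                      # skip windows containing N or other symbols
            counts[idx] += 1
    total = counts.sum()
    return counts / total if total else counts

def predict_host(phage_seq, host_seqs):
    """Return the name of the candidate host with the most similar composition."""
    p = kmer_profile(phage_seq)
    dists = {name: np.linalg.norm(p - kmer_profile(s)) for name, s in host_seqs.items()}
    return min(dists, key=dists.get)

# Toy usage with made-up sequences.
hosts = {"hostA": "ACGT" * 500, "hostB": "AATT" * 500}
print(predict_host("ACGTACGAACGT" * 100, hosts))  # expected "hostA"
```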

  10. Trajectory Evaluation of Rotor-Flying Robots Using Accurate Inverse Computation Based on Algorithm Differentiation

    Yuqing He

    2014-01-01

    Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of its model and significant uncertainties regarding many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible. It is necessary to conduct an analysis of the flight maneuvering ability of an RFR prior to a test flight. Our aim in this paper is to use a numerical method called algorithm differentiation (AD) to solve this problem. The basic idea is to compute the internal states (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative-based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.

  11. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(ii)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understand the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609

  12. Accurate micro-computed tomography imaging of pore spaces in collagen-based scaffold.

    Zidek, Jan; Vojtova, Lucy; Abdel-Mohsen, A M; Chmelik, Jiri; Zikmund, Tomas; Brtnikova, Jana; Jakubicek, Roman; Zubal, Lukas; Jan, Jiri; Kaiser, Jozef

    2016-06-01

    In this work we have used X-ray micro-computed tomography (μCT) as a method to observe the morphology of 3D porous pure collagen and collagen-composite scaffolds useful in tissue engineering. Two aspects of visualizations were taken into consideration: improvement of the scan and investigation of its sensitivity to the scan parameters. Due to the low material density some parts of collagen scaffolds are invisible in a μCT scan. Therefore, here we present different contrast agents, which increase the contrast of the scanned biopolymeric sample for μCT visualization. The increase of contrast of collagenous scaffolds was performed with ceramic hydroxyapatite microparticles (HAp), silver ions (Ag(+)) and silver nanoparticles (Ag-NPs). Since a relatively small change in imaging parameters (e.g. in 3D volume rendering, threshold value and μCT acquisition conditions) leads to a completely different visualized pattern, we have optimized these parameters to obtain the most realistic picture for visual and qualitative evaluation of the biopolymeric scaffold. Moreover, scaffold images were stereoscopically visualized in order to better see the 3D biopolymer composite scaffold morphology. However, the optimized visualization has some discontinuities in zoomed view, which can be problematic for further analysis of interconnected pores by commonly used numerical methods. Therefore, we applied the locally adaptive method to solve discontinuities issue. The combination of contrast agent and imaging techniques presented in this paper help us to better understand the structure and morphology of the biopolymeric scaffold that is crucial in the design of new biomaterials useful in tissue engineering. PMID:27153826

  13. Computability and Analysis, a Historical Approach

    Brattka, Vasco

    2016-01-01

    The history of computability theory and the history of analysis have been surprisingly intertwined since the beginning of the twentieth century. For one, Émile Borel discussed his ideas on computable real number functions in his introduction to measure theory. On the other hand, Alan Turing had computable real numbers in mind when he introduced his now famous machine model. Here we want to focus on a particular aspect of computability and analysis, namely on computability properties of theorem...

  14. How accurate is image-free computer navigation for hip resurfacing arthroplasty? An anatomical investigation

    The existing studies concerning image-free navigated implantation of hip resurfacing arthroplasty are based on analysis of the accuracy of conventional biplane radiography. Studies have shown that these measurements in biplane radiography are imprecise and that precision is improved by use of three-dimensional (3D) computer tomography (CT) scans. To date, the accuracy of image-free navigation devices for hip resurfacing has not been investigated using CT scans, and anteversion accuracy has not been assessed at all. Furthermore, no study has tested the reliability of the navigation software concerning the automatically calculated implant position. The purpose of our study was to analyze the accuracy of varus-valgus and anteversion using an image-free hip resurfacing navigation device. The reliability of the software-calculated implant position was also determined. A total of 32 femoral hip resurfacing components were implanted on embalmed human femurs using an image-free navigation device. In all, 16 prostheses were implanted with the proposed position generated by the navigation software; the other 16 prostheses were inserted in an optimized valgus position. A 3D CT scan was undertaken before and after the operation. The difference between the measured and planned varus-valgus angle averaged 1 deg (mean±standard deviation (SD): group I, 1 deg±2 deg; group II, 1 deg±1 deg). The mean±SD difference between femoral neck anteversion and anteversion of the implant was 4 deg (group I, 4 deg±4 deg; group II, 4 deg±3 deg). The software-calculated implant position differed by 7 deg±8 deg from the measured neck-shaft angle. These measured accuracies did not differ significantly between the two groups. Our study proved the high accuracy of the navigation device concerning the most important biomechanical factor: the varus-valgus angle. The software calculation of the proposed implant position has been shown to be inaccurate and needs improvement. Hence, manual adjustment of the

  15. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged Van der Waals interaction in classical molecular dynamics simulations, it has been a long-lasting issue to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we outline the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey is summarized with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304) and the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
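
    As a concrete example of the truncation-type methods surveyed above, the sketch below implements the shifted-force Coulomb scheme, in which both the pair potential and the force are brought smoothly to zero at a cutoff radius. It is a minimal illustration in reduced units (prefactor set to 1), not a production electrostatics kernel, and the shifted-force form is only one representative of the truncation family discussed in the review.

```python
# Minimal sketch of one truncation-type electrostatics scheme: the shifted-force
# Coulomb potential, where both the potential and the force vanish at the cutoff
# radius r_c, so the truncation does not introduce force discontinuities.
# Units and the Coulomb prefactor are set to 1 for illustration.
import numpy as np

def shifted_force_coulomb(q_i, q_j, r, r_c):
    """Pair energy and radial force for the shifted-force truncated Coulomb interaction."""
    if r >= r_c:
        return 0.0, 0.0
    # bare Coulomb terms
    u = q_i * q_j / r
    f = q_i * q_j / r**2
    # shift so that U(r_c) = 0 and F(r_c) = 0:
    # U_sf(r) = U(r) - U(r_c) - (r - r_c) * U'(r_c)
    u_shifted = u - q_i * q_j / r_c + q_i * q_j / r_c**2 * (r - r_c)
    f_shifted = f - q_i * q_j / r_c**2
    return u_shifted, f_shifted

# Toy usage: opposite unit charges, cutoff 10 (arbitrary units); note that both
# energy and force tend to zero as r approaches the cutoff.
for r in (1.0, 5.0, 9.99):
    print(r, shifted_force_coulomb(1.0, -1.0, r, 10.0))
```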

  16. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation

  17. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Ahnen, Sandra; Hehn, Anna-Sophia [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Vogiatzis, Konstantinos D. [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany); Trachsel, Maria A.; Leutwyler, Samuel [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Klopper, Wim, E-mail: klopper@kit.edu [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany)

    2014-09-30

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.

  18. An accurate scheme to solve cluster dynamics equations using a Fokker-Planck approach

    Jourdan, Thomas; Legoll, Frédéric; Monasse, Laurent

    2016-01-01

    We present a numerical method to accurately simulate particle size distributions within the formalism of rate equation cluster dynamics. This method is based on a discretization of the associated Fokker-Planck equation. We show that particular care has to be taken to discretize the advection part of the Fokker-Planck equation, in order to avoid distortions of the distribution due to numerical diffusion. For this purpose we use the Kurganov-Noelle-Petrova scheme coupled with the monotonicity-preserving reconstruction MP5, which leads to very accurate results. The interest of the method is highlighted on the case of loop coarsening in aluminum. We show that the choice of the models to describe the energetics of loops does not significantly change the normalized loop distribution, while the choice of the models for the absorption coefficients seems to have a significant impact on it.
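
    To see why the advection part of the Fokker-Planck equation demands a careful discretization, the toy sketch below advects a sharp cluster-size distribution with a plain first-order upwind scheme, deliberately much cruder than the Kurganov-Noelle-Petrova/MP5 combination used by the authors: the peak is visibly smeared, which is exactly the numerical diffusion their scheme is designed to avoid.

```python
# Simple illustration (far simpler than the scheme the authors use) of why the
# advection term needs care: a first-order upwind discretization smears an
# initially sharp size distribution, i.e. it adds numerical diffusion.
import numpy as np

def advect_upwind(f, v, dx, dt, steps):
    """Advance df/dt + v df/dx = 0 with first-order upwind differences (v > 0)."""
    f = f.copy()
    c = v * dt / dx                      # CFL number, must be <= 1 for stability
    for _ in range(steps):
        f[1:] = f[1:] - c * (f[1:] - f[:-1])
    return f

x = np.linspace(0.0, 1.0, 400)
f0 = np.where(np.abs(x - 0.2) < 0.02, 1.0, 0.0)   # sharp initial size distribution
f1 = advect_upwind(f0, v=1.0, dx=x[1] - x[0], dt=0.001, steps=400)
print("initial peak:", f0.max(), " advected peak:", f1.max())  # peak visibly lowered
```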

  19. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    Hrubý Jan

    2012-04-01

    Full Text Available Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require considerable computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
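
    The tabulation idea described above can be illustrated with a generic lookup structure: properties are pre-computed on a grid in (density, internal energy) and evaluated by interpolation, so the flow solver never iterates on the full IAPWS formulations. The sketch below is only a schematic of that structure; the placeholder property function is an ideal-gas-like relation, not IAPWS-95, and plain bilinear interpolation does not provide the stronger thermodynamic consistency of the paper's piecewise representation.

```python
# Generic sketch of a tabulated property evaluation with (density, internal energy)
# as independent variables: values are pre-computed on a grid and queried by
# bilinear interpolation. The filler property function is a placeholder, not IAPWS-95.
import numpy as np

class PropertyTable:
    def __init__(self, rho, u, prop_fn):
        self.rho, self.u = rho, u
        # table[i, j] = property at (rho[i], u[j])
        self.table = np.array([[prop_fn(r, e) for e in u] for r in rho])

    def __call__(self, rho, u):
        i = np.clip(np.searchsorted(self.rho, rho) - 1, 0, len(self.rho) - 2)
        j = np.clip(np.searchsorted(self.u, u) - 1, 0, len(self.u) - 2)
        tr = (rho - self.rho[i]) / (self.rho[i + 1] - self.rho[i])
        tu = (u - self.u[j]) / (self.u[j + 1] - self.u[j])
        t = self.table
        return ((1 - tr) * (1 - tu) * t[i, j] + tr * (1 - tu) * t[i + 1, j]
                + (1 - tr) * tu * t[i, j + 1] + tr * tu * t[i + 1, j + 1])

# Toy usage with a placeholder ideal-gas-like relation p = (gamma - 1) * rho * u.
rho_grid = np.linspace(0.01, 10.0, 50)
u_grid = np.linspace(1e5, 3e6, 50)
p_table = PropertyTable(rho_grid, u_grid, lambda r, e: 0.33 * r * e)
print(p_table(1.2, 2.2e6))          # interpolated pressure
```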

  20. An accurate and efficient experimental approach for characterization of the complex oral microbiota

    Zheng, Wei; Tsompana, Maria; Ruscitto, Angela; Sharma, Ashu; Genco, Robert; Sun, Yijun; Buck, Michael J.

    2015-01-01

    Background Currently, taxonomic interrogation of microbiota is based on amplification of 16S rRNA gene sequences in clinical and scientific settings. Accurate evaluation of the microbiota depends heavily on the primers used, and genus/species resolution bias can arise with amplification of non-representative genomic regions. The latest Illumina MiSeq sequencing chemistry has extended the read length to 300 bp, enabling deep profiling of large number of samples in a single paired-end reaction ...

  1. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  2. Human Computer Interaction: An intellectual approach

    Mr. Kuntal Saroha; Sheela Sharma; Gurpreet Bhatia

    2011-01-01

    This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. ...

  3. Inferring haplotypes at the NAT2 locus: the computational approach

    Sabbagh Audrey

    2005-06-01

    Full Text Available Abstract Background Numerous studies have attempted to relate genetic polymorphisms within the N-acetyltransferase 2 gene (NAT2) to interindividual differences in response to drugs or in disease susceptibility. However, genotyping of individual single-nucleotide polymorphisms (SNPs) alone may not always provide enough information to reach these goals. It is important to link SNPs in terms of haplotypes, which carry more information about the genotype-phenotype relationship. Special analytical techniques have been designed to unequivocally determine the allocation of mutations to either DNA strand. However, molecular haplotyping methods are labour-intensive and expensive and do not appear to be good candidates for routine clinical applications. A cheap and relatively straightforward alternative is the use of computational algorithms. The objective of this study was to assess the performance of the computational approach in NAT2 haplotype reconstruction from phase-unknown genotype data, for population samples of various ethnic origin. Results We empirically evaluated the effectiveness of four haplotyping algorithms in predicting haplotype phases at NAT2, by comparing the results with those directly obtained through molecular haplotyping. All computational methods provided remarkably accurate and reliable estimates for NAT2 haplotype frequencies and individual haplotype phases. The Bayesian algorithm implemented in the PHASE program performed the best. Conclusion This investigation provides a solid basis for the confident and rational use of computational methods, which appear to be a good alternative to infer haplotype phases in the particular case of the NAT2 gene, where there is near complete linkage disequilibrium between polymorphic markers.

  4. Physics and computer science: quantum computation and other approaches

    Salvador E. Venegas-Andraca

    2011-01-01

    This is a position paper written as an introduction to the special volume on quantum algorithms I edited for the journal Mathematical Structures in Computer Science (Volume 20 - Special Issue 06 (Quantum Algorithms), 2010).

  5. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  6. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Jaco Prinsloo

    2016-06-01

    Full Text Available Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  7. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  8. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with some molecules before use. However, information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted the better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike traditional physical methods, this method, which is based on the catalytic property of AuNPs, can give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been demonstrated by adopting three independent molecules to functionalize AuNPs, covering both covalent and non-covalent functionalization. Some interesting results are thereby obtained, several of which are revealed for the first time. The method may also be further developed as a useful tool for the characterization of a solid surface

  9. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  10. Cluster Computing: A Mobile Code Approach

    Patel, R. B.; Manpreet Singh

    2006-01-01

    Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster Computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we have presented design and implementation of a cluster based framework using mobile code. The cluster implementation involves the designing of a server named MCLUSTER which manages the configuring, resetting of clust...

  11. What is Computation: An Epistemic Approach

    Wiedermann, Jiří; van Leeuwen, J.

    Berlin: Springer, 2015 - (Italiano, G.; Margaria-Steffen, T.; Pokorný, J.; Quisquater, J.; Wattenhofer, R.), s. 1-13. (Lecture Notes in Computer Science. 8939). ISBN 978-3-662-46077-1. ISSN 0302-9743. [Sofsem 2015. International Conference on Current Trends in Theory and Practice of Computer Science /41./. Pec pod Sněžkou (CZ), 24.01.2015-29.01.2015] R&D Projects: GA ČR GAP202/10/1333 Institutional support: RVO:67985807 Keywords : computation * knowledge generation * information technology Subject RIV: IN - Informatics, Computer Science

  12. Accurate Waveforms for Non-spinning Binary Black Holes using the Effective-one-body Approach

    Buonanno, Alessandra; Pan, Yi; Baker, John G.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.

    2007-01-01

    Using numerical relativity as guidance and the natural flexibility of the effective-one-body (EOB) model, we extend the latter so that it can successfully match the numerical relativity waveforms of non-spinning binary black holes during the last stages of inspiral, merger and ringdown. Here, by successfully, we mean with phase differences black-hole masses. The final black-hole mass and spin predicted by the numerical simulations are used to determine the ringdown frequency and decay time of three quasi-normal-mode damped sinusoids that are attached to the EOB inspiral-(plunge) waveform at the light-ring. The accurate EOB waveforms may be employed for coherent searches of gravitational waves emitted by non-spinning coalescing binary black holes with ground-based laser-interferometer detectors.

  13. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during radiotherapy course. This study is to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.

  14. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    Hu, Chih-Chung [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Huang, Wen-Tao [Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Tsai, Chiao-Ling; Chao, Hsiao-Ling; Huang, Guo-Ming; Wang, Chun-Wei [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Wu, Jian-Kuen [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Wu, Chien-Jang [National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Cheng, Jason Chia-Hsien [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Clinical Medicine; National Taiwan Univ. Taipei (China). Graduate Inst. of Biomedical Electronics and Bioinformatics

    2011-10-15

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during radiotherapy course. This study is to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.
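
    The CT-number modification step described in the two records above can be sketched as a calibration curve: CBCT numbers measured for known phantom materials are paired with the corresponding fan-beam CT numbers, and the resulting piecewise-linear mapping is applied voxel-wise before dose calculation. The calibration values below are invented for illustration and are not those of the study.

```python
# Hedged sketch of a CBCT-to-FBCT CT-number calibration: CBCT numbers measured for
# a set of phantom materials are mapped onto the corresponding fan-beam CT numbers
# with a piecewise-linear curve, which is then applied voxel-wise to the CBCT volume
# before dose calculation. The material values below are illustrative only.
import numpy as np

# paired (CBCT HU, FBCT HU) calibration points, e.g. air, lung, water, bone inserts
cbct_hu = np.array([-950.0, -780.0, -20.0, 250.0, 900.0])
fbct_hu = np.array([-1000.0, -800.0, 0.0, 230.0, 1200.0])

def correct_cbct(volume_hu):
    """Map a CBCT volume (HU) to FBCT-equivalent HU by piecewise-linear interpolation."""
    return np.interp(volume_hu, cbct_hu, fbct_hu)

# Toy usage on a small synthetic volume.
cbct_volume = np.random.default_rng(1).uniform(-1000, 1000, size=(4, 4, 4))
corrected = correct_cbct(cbct_volume)
print(corrected.min(), corrected.max())
```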

  15. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands and dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants a much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  16. Computer science approach to quantum control

    Janzing, Dominik

    2006-01-01

    This work considers several hypothetical control processes at the nanoscopic level and shows their analogy to computation processes. It shows that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer.

  17. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    Zhibin Miao

    2015-12-01

    Full Text Available With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common approach, GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. Although prior model parameters must be identified for a Kalman filter, it is commonly used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, simulations and real experiments were conducted to verify these advantages. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
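
    The GPS/INS fusion mentioned in the record above can be illustrated with a minimal one-dimensional Kalman filter whose prediction step integrates the INS acceleration and whose update step corrects the state with the GPS position fix. This is a generic single-stage sketch, not the paper's two-stage filter or vehicle dynamic model; the noise levels and the motion model are assumptions.

```python
# Minimal, generic sketch of GPS/INS fusion with a 1-D Kalman filter: the prediction
# step integrates the INS acceleration, the update step corrects the state with the
# GPS position fix. Noise levels and motion model are illustrative assumptions.
import numpy as np

def kf_fuse(accel_ins, pos_gps, dt, q=0.5, r=4.0):
    """Fuse INS acceleration and GPS position into a [position, velocity] estimate."""
    x = np.zeros(2)                     # state: [position, velocity]
    P = np.eye(2) * 10.0                # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])     # control vector for the measured acceleration
    H = np.array([[1.0, 0.0]])          # GPS observes position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    history = []
    for a, z in zip(accel_ins, pos_gps):
        # predict with the INS acceleration
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # update with the GPS position
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)

# Toy usage: constant 0.2 m/s^2 acceleration, noisy GPS fixes at 10 Hz.
rng = np.random.default_rng(3)
dt, n = 0.1, 100
true_pos = 0.5 * 0.2 * (np.arange(n) * dt) ** 2
est = kf_fuse(np.full(n, 0.2), true_pos + rng.normal(0, 2.0, n), dt)
print("final position estimate:", est[-1, 0], " true:", true_pos[-1])
```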

  18. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) has a dinuclear iron site in two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, hydrogen bond donors to enable fixation of the substrate and release of the product are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent experiments, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  19. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions rapidly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation-invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts along the reaction coordinate: asymptotic, intermediate, and interaction regions. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of the PES routine is enhanced by a factor of ~20 compared with that of the global PES. In the interaction region, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, those from previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is predicted.

  20. DISTRIBUTED COMPUTING APPROACHES FOR SCALABILITY AND HIGH PERFORMANCE

    MANJULA K A

    2010-06-01

    Full Text Available Distributed computing is a science which solves a large problem by giving small parts of the problem to many computers and then combining the solutions for the parts into a solution for the whole. This distributed computing framework suits projects that have an insatiable appetite for computing power; two such popular projects are SETI@Home and Folding@Home. Different architectures and approaches for distributed computing are being proposed as part of work progressing around the world. One way of distributing both data and computing power, known as grid computing, taps the Internet to put petabyte processing on every researcher's desktop. Grid technology is finding its way out of the academic incubator and entering commercial environments. Cloud computing, a variant of grid computing, has emerged as a potentially competing approach for architecting large distributed systems. Clouds can be viewed as a logical and next higher-level abstraction from grids.

  1. Assessing creativity in computer music ensembles: a computational approach

    Comajuncosas, Josep M.

    2016-01-01

    Over the last decade Laptop Orchestras and Mobile Ensembles have proliferated. As a result, a large body of research has arisen on infrastructure, evaluation, design principles and compositional methodologies for Computer Music Ensembles (CMEs). However, very little is known about the challenges and opportunities provided by CMEs for creativity in musical performance. Therefore, one of the most common issues CMEs have to deal with is the lack of ...

  2. Human Computer Interaction: An intellectual approach

    Kuntal Saroha

    2011-08-01

    Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  3. Uncertainty in biology: a computational modeling approach

    2015-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building...

  4. NETWORK SECURITY: AN APPROACH TOWARDS SECURE COMPUTING

    Rahul Pareek

    2011-01-01

    The security of computer networks plays a strategic role in modern computer systems. In order to enforce high protection levels against malicious attacks, a number of software tools have been developed. Intrusion Detection Systems have recently become a heated research topic due to their capability of detecting and preventing attacks from malicious network users. A pattern-matching IDS for network security has been proposed in this paper. Many network security applications...

  5. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    Abolfazli, Saeid; Sanaei, Zohreh; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward a rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, name...

  6. Computational approaches to enhance mass spectrometry-based proteomics

    Neuhauser, Nadin

    2014-01-01

    In this thesis I present three computational approaches that improve the analysis of mass spectrometry-based proteomics data. The novel search engine Andromeda allows efficient identification of peptides and proteins. Implementation of a rule-based expert system extracts more of the detailed information contained in the mass spectra. Furthermore, I adapted our computational proteomics pipeline to high-performance computers.

  7. Computational dynamics for robotics systems using a non-strict computational approach

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  8. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  9. Machine learning and synthetic aperture refocusing approach for more accurate masking of fish bodies in 3D PIV data

    Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    3D particle image velocimetry (PIV) is becoming a popular technique for studying biological flows. PIV images that contain fish or other animals around which flow is being studied need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of a fish in a volume are determined, it can be masked out.

  10. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detection and characterization of anti-drug antibodies (ADA) against a protein therapeutic are crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach is employed for testing patient samples for ADA activity: an initial rapid screen for positive samples, which are subsequently confirmed in a separate assay. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that, for the overall performance of the multi-stage process, the method used for confirmation is most important, with a t-test performing best when differences are moderate to large. Moreover, we find that when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. PMID:27262992
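
    A competition-based confirmatory step of the kind evaluated here can be sketched as a one-sided t-test on replicate signals measured with and without excess drug. The snippet below is only illustrative; the replicate counts, significance level and data are assumptions, not the cut points used in the study.

```python
# Hedged sketch of a competition-based confirmatory step: replicate signals of
# a screened-positive sample are compared with and without excess drug, and the
# sample is confirmed when a one-sided t-test shows significant inhibition.
import numpy as np
from scipy import stats

def confirm_positive(signal_untreated, signal_with_drug, alpha=0.01):
    """Return True if drug competition significantly suppresses the signal."""
    t, p = stats.ttest_ind(signal_untreated, signal_with_drug,
                           alternative="greater", equal_var=False)
    return p < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    untreated = rng.normal(1.8, 0.1, 6)   # screening signal replicates
    competed = rng.normal(1.1, 0.1, 6)    # replicates with excess drug added
    print("confirmed positive:", confirm_positive(untreated, competed))
```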

  11. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    Petkova, Margarita

    2010-01-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretisation of the radiative transfer equation in Boltzmann form with adjustable angular resolution that in principle works equally well in the optically thin and optically thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid-angle, tessellating the unit sphere. Each of these "cone-fields" is transported independently, with constant intensity as a function of direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger timesteps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally...

  12. Human brain mapping: Experimental and computational approaches

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  13. Uncertainty in biology a computational modeling approach

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  14. Heterogeneous Computing in Economics: A Simplified Approach

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a...

  15. Heterogeneous Computing in Economics: A Simplified Approach

    Dziubinski, Matt P.; Grassi, Stefano

    2012-01-01

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a speed gain together with a simplified programming style that naturally enables parallelization.

  16. Cluster Computing: A Mobile Code Approach

    R. B. Patel

    2006-01-01

    Full Text Available Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we present the design and implementation of a cluster-based framework using mobile code. The cluster implementation involves the design of a server named MCLUSTER, which manages the configuring and resetting of the cluster. It allows a user to provide the necessary information regarding the application to be executed via a graphical user interface (GUI). The framework handles the generation of application mobile code and its distribution to appropriate client nodes, the efficient handling of results generated and communicated by a number of client nodes, and the recording of the application's execution time. The client node receives and executes the mobile code that defines the distributed job submitted by the MCLUSTER server and returns the results. We have also analyzed the performance of the developed system, emphasizing the tradeoff between communication and computation overhead.

  17. LOCUSTRA: accurate prediction of local protein structure using a two-layer support vector machine approach.

    Zimmermann, Olav; Hansmann, Ulrich H E

    2008-09-01

    Constraint generation for 3D structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVMs). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16 = 61.0% for the 16 classes of the structural alphabet and Q3 = 79.2% for a simple mapping to the three secondary structure classes helix, sheet, and coil. We achieve a mean phi (psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the dihedral angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837
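
    The two-layer idea can be illustrated with a small scikit-learn sketch: a first SVM produces per-class scores for each sequence window, and a second SVM re-classifies each position from the first layer's scores over neighbouring windows. The features, window size and labels below are synthetic placeholders; this is not the LOCUSTRA implementation.

```python
# Hedged sketch of a two-layer SVM scheme in the spirit of LOCUSTRA.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_feat, n_classes, half_win = 600, 20, 4, 2

X = rng.normal(size=(n, n_feat))                   # per-position window features
y = rng.integers(0, n_classes, size=n)             # local-structure class labels

# Layer 1: per-position class scores.
svm1 = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
scores = svm1.decision_function(X)                 # shape (n, n_classes)

# Layer 2: concatenate scores over a sliding window of neighbouring positions.
pad = np.pad(scores, ((half_win, half_win), (0, 0)), mode="edge")
X2 = np.hstack([pad[i:i + n] for i in range(2 * half_win + 1)])
svm2 = SVC(kernel="rbf").fit(X2, y)

print("layer-2 training accuracy:", svm2.score(X2, y))
```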

  18. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with a moving overset grid technique has been developed to compute unsteady compressible viscous flows on a helicopter rotor in forward flight. In order to handle the periodic rotation and pitching of the rotor and the relative motion between rotor blades, the moving overset grid technique is extended: a special judgement criterion is applied near the odd surface of the blade grid during the search for donor cells with the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow-information exchange between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a non-variable-pitch rotor and a periodically variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and quickly reach periodic solutions.
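
    The core building block of such a scheme, the fifth-order WENO-Z reconstruction of a single scalar interface value, can be written compactly as below. This is a hedged, stand-alone sketch following the usual WENO-Z weight definition; the rotor solver's Roe fluxes, overset grids and implicit time stepping are not reproduced.

```python
# Stand-alone sketch of a fifth-order WENO-Z reconstruction of one 1D scalar
# interface value from five neighbouring cell values (illustrative only).
import numpy as np

def weno_z(fm2, fm1, f0, fp1, fp2, eps=1e-40):
    """Reconstruct the value at the right cell face x_{i+1/2} from five cells."""
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    tau5 = abs(b0 - b2)                       # WENO-Z global smoothness indicator
    d = np.array([0.1, 0.6, 0.3])             # ideal (linear) weights
    alpha = d * (1.0 + tau5 / (np.array([b0, b1, b2]) + eps))
    w = alpha / alpha.sum()
    q = np.array([(2*fm2 - 7*fm1 + 11*f0) / 6,    # candidate stencil values
                  ( -fm1 + 5*f0  +  2*fp1) / 6,
                  (2*f0  + 5*fp1 -   fp2) / 6])
    return float(w @ q)

if __name__ == "__main__":
    # Smooth data: the reconstruction should be close to the exact face value.
    x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) * 0.1
    f = np.sin(x)
    print(weno_z(*f), "vs exact", np.sin(0.05))
```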

  19. Computational approach for a pair of bubble coalescence process

    Nurul Hasan, E-mail: nurul_hasan@petronas.com.my [Department of Chemical Engineering, Universiti Teknologi Petronas, Bandar Seri Iskandar, Perak 31750 (Malaysia); Zalinawati binti Zakaria [Department of Chemical Engineering, Universiti Teknologi Petronas, Bandar Seri Iskandar, Perak 31750 (Malaysia)

    2011-06-15

    The coalescence of bubbles has great value in mineral recovery and the oil industry. In this paper, two co-axial bubbles rising in a cylinder are modelled to study bubble coalescence for four computational test cases. The Reynolds number (Re) is chosen between 8.50 and 10, the Bond number Bo ≈ 4.25-50, and the Morton number M 0.0125-14.7. The viscosity ratio (μr) and density ratio (ρr) of liquid to bubble are kept constant (100 and 850, respectively). It was found that the Bo number has a significant effect on the coalescence process for constant Re, μr and ρr. The bubble-bubble distance over time was validated against published experimental data. The results show that the VOF approach can be used to model these phenomena accurately. The surface tension was changed to alter Bo, and the density of the fluids to alter Re and M, keeping μr and ρr the same. It was found that for lower Bo, the bubbles coalesce more slowly and the pocket at the lower part of the leading bubble is less concave (towards downward), which is supported by the experimental data.
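
    The governing dimensionless groups quoted above can be computed directly from the fluid properties, as in the short helper below; the property values in the example are placeholders, not those of the four test cases.

```python
# Illustrative computation of the dimensionless groups governing bubble rise
# and coalescence (Reynolds, Bond/Eotvos, Morton) from fluid properties.
def bubble_numbers(rho_l, mu_l, sigma, d, u, g=9.81, rho_g=0.0):
    drho = rho_l - rho_g
    re = rho_l * u * d / mu_l                        # Reynolds number
    bo = drho * g * d**2 / sigma                     # Bond (Eotvos) number
    mo = g * mu_l**4 * drho / (rho_l**2 * sigma**3)  # Morton number
    return re, bo, mo

if __name__ == "__main__":
    re, bo, mo = bubble_numbers(rho_l=1000.0, mu_l=0.5, sigma=0.05,
                                d=0.01, u=0.05)
    print(f"Re = {re:.2f}, Bo = {bo:.2f}, Mo = {mo:.4f}")
```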

  20. A simple, efficient, and high-order accurate curved sliding-mesh interface approach to spectral difference method on coupled rotating and stationary domains

    Zhang, Bin; Liang, Chunlei

    2015-08-01

    This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.

  1. Computational Approach To Understanding Autism Spectrum Disorders

    Włodzisław Duch

    2012-01-01

    Full Text Available Every year the prevalence of Autism Spectrum Disorders (ASD) is rising. Is there a unifying mechanism of various ASD cases at the genetic, molecular, cellular or systems level? The hypothesis advanced in this paper is focused on neural dysfunctions that lead to problems with attention in autistic people. Simulations of attractor neural networks performing cognitive functions help to assess long-term system neurodynamics. The Fuzzy Symbolic Dynamics (FSD) technique is used for the visualization of attractors in the semantic layer of the neural model of reading. Large-scale simulations of brain structures characterized by a high order of complexity require enormous computational power, especially if biologically motivated neuron models are used to investigate the influence of cellular structure dysfunctions on the network dynamics. Such simulations have to be implemented on computer clusters in grid-based architectures.

  2. Music Genre Classification Systems - A Computational Approach

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  3. A fourth-order accurate curvature computation in a level set framework for two-phase flows subjected to surface tension forces

    Coquerelle, Mathieu; Glockner, Stéphane

    2016-01-01

    We propose an accurate and robust fourth-order curvature extension algorithm in a level set framework for the transport of the interface. The method is based on the Continuum Surface Force approach, and is shown to efficiently calculate surface tension forces for two-phase flows. In this framework, the accuracy of the algorithms mostly relies on the precise computation of the surface curvature which we propose to accomplish using a two-step algorithm: first by computing a reliable fourth-order curvature estimation from the level set function, and second by extending this curvature rigorously in the vicinity of the surface, following the Closest Point principle. The algorithm is easy to implement and to integrate into existing solvers, and can easily be extended to 3D. We propose a detailed analysis of the geometrical and numerical criteria responsible for the appearance of spurious currents, a well known phenomenon observed in various numerical frameworks. We study the effectiveness of this novel numerical method on state-of-the-art test cases showing that the resulting curvature estimate significantly reduces parasitic currents. In addition, the proposed approach converges to fourth-order regarding spatial discretization, which is two orders of magnitude better than algorithms currently available. We also show the necessity for high-order transport methods for the surface by studying the case of the 2D advection of a column at equilibrium thereby proving the robustness of the proposed approach. The algorithm is further validated on more complex test cases such as a rising bubble.
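
    For orientation, the quantity being extended here, the interface curvature κ = ∇·(∇φ/|∇φ|) of a level-set field, can be approximated with ordinary second-order finite differences as in the sketch below. This is a deliberately lower-order stand-in for the paper's fourth-order estimate with closest-point extension, shown only to make the target quantity concrete.

```python
# Simplified sketch: second-order finite-difference curvature of a 2D level set.
import numpy as np

def curvature(phi, h):
    px, py = np.gradient(phi, h)
    pxx = np.gradient(px, h, axis=0)
    pyy = np.gradient(py, h, axis=1)
    pxy = np.gradient(px, h, axis=1)
    denom = (px**2 + py**2) ** 1.5 + 1e-12
    return (pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2) / denom

if __name__ == "__main__":
    h = 0.01
    x, y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h), indexing="ij")
    phi = np.sqrt(x**2 + y**2) - 0.5        # signed distance to a circle r = 0.5
    kappa = curvature(phi, h)
    ring = np.abs(phi) < h                  # cells near the interface
    print("mean curvature near interface:", kappa[ring].mean(), "(exact 1/0.5 = 2)")
```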

  4. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic age

  5. A new approach to accurate validation of remote sensing retrieval of evapotranspiration based on data fusion

    C. Sun

    2010-03-01

    obtained from RS retrieval, which was in accordance with previous studies (Jamieson, 1982; Dugas and Ainsworth, 1985; Benson et al., 1992; Pereira and Nova, 1992).

    After the data fusion, the correlation (R2 = 0.8516) between the monthly runoff obtained from the simulation based on ET retrieval and the observed data was higher than that (R2 = 0.8411) between the data obtained from the PM-based ET simulation and the observed data. As for the RMSE, the result (RMSE = 26.0860) between the simulated runoff based on ET retrieval and the observed data was also superior to the result (RMSE = 35.71904) between the simulated runoff obtained with PM-based ET and the observed data. As for the MBE parameter, the result (MBE = −8.6578) for the RS retrieval method was clearly better than that (MBE = −22.7313) for the PM-based method. The comparison shows that RS retrieval had better adaptivity and higher accuracy than the PM-based method, and that the new approach based on data fusion and the distributed hydrological model is feasible, reliable and worth studying further.
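
    The three agreement statistics quoted above (R2, RMSE and MBE) can be reproduced with a few lines of code; the helper below uses the squared Pearson correlation for R2, which is one common convention, and the runoff arrays are synthetic placeholders rather than the study's data.

```python
# Small helper computing the three agreement statistics for a
# simulated-vs-observed runoff series (illustrative data only).
import numpy as np

def agreement_stats(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r2 = np.corrcoef(sim, obs)[0, 1] ** 2       # coefficient of determination
    rmse = np.sqrt(np.mean((sim - obs) ** 2))   # root-mean-square error
    mbe = np.mean(sim - obs)                    # mean bias error
    return r2, rmse, mbe

if __name__ == "__main__":
    obs = np.array([120.0, 95.0, 180.0, 210.0, 160.0, 140.0])
    sim = np.array([110.0, 100.0, 170.0, 220.0, 150.0, 130.0])
    print("R2 = %.4f, RMSE = %.4f, MBE = %.4f" % agreement_stats(sim, obs))
```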

  6. SOFT COMPUTING APPROACH FOR NOISY IMAGE RESTORATION

    2000-01-01

    A genetic learning algorithm based fuzzy neural network was proposed for noisy image restoration, which can adaptively find and extract the fuzzy rules contained in the noise. It can efficiently remove image noise and preserve the detailed image information as much as possible. The experimental results show that the proposed approach is able to perform far better than conventional noise-removal techniques.

  7. A complex network approach to cloud computing

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem that heavily influences the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdős-Rényi (ER) and Barabási-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees, and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...
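
    The two indices described above can be sketched on small ER and BA graphs with networkx, assigning every user node to its nearest of two servers and reporting the mean hop distance (cost) and the ratio of the two server loads (balance). Graph sizes, degrees and server choices below are illustrative assumptions.

```python
# Hedged sketch of cost and balance indices on ER and BA graphs with two servers.
import networkx as nx

def cloud_indices(g, servers):
    dist = {s: nx.single_source_shortest_path_length(g, s) for s in servers}
    cost, load = 0.0, {s: 0 for s in servers}
    for n in (n for n in g if n not in servers):
        s = min(servers, key=lambda s: dist[s].get(n, float("inf")))
        d = dist[s].get(n)
        if d is None:                 # node unreachable from both servers; skip it
            continue
        cost += d
        load[s] += 1
    reached = sum(load.values())
    balance = min(load.values()) / max(load.values())
    return cost / reached, balance

if __name__ == "__main__":
    er = nx.erdos_renyi_graph(500, 0.02, seed=1)
    ba = nx.barabasi_albert_graph(500, 5, seed=1)
    for name, g in [("ER", er), ("BA", ba)]:
        print(name, cloud_indices(g, servers=[0, 1]))
```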

  8. Approach for Application on Cloud Computing

    Shiv Kumar

    2012-08-01

    Full Text Available A web application is any application using a web browser as client; we can say that it is a dynamic version of a web or application server. There are two types of web applications based on orientation: 1. A presentation-oriented web application generates interactive web pages containing various types of markup language (HTML, XML, etc.) and dynamic content in response to requests. 2. A service-oriented web application implements the endpoint of a web service. Web applications commonly use server-side scripts (ASP, PHP, etc.) and client-side scripts (HTML, JavaScript, etc.) to develop the application. Web applications are used in the fields of banking, insurance, marketing, finance, services, etc. "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." - U.S. National Institute of Standards and Technology (NIST). A general and simple cloud computing definition is using web applications and/or server services that you pay to access rather than software or hardware that you buy and install.

  9. Integrated Computable General Equilibrium (CGE) microsimulation approach

    John Cockburn; Erwin Corong; Caesar Cororaton

    2010-01-01

    Conventionally, the analysis of macro-economic shocks and the analysis of income distribution and poverty require very different methodological techniques and sources of data. Over the last decade however, the natural divide between both approaches has diminished, as evaluating the impact of macro-economic shocks on poverty and income distribution within a CGE framework complemented by household survey data has flourished. This paper focuses on explicitly integrating into a CGE model each hou...

  10. Computing Greeks for Lévy Models: The Fourier Transform Approach

    Federico De Olivera; Ernesto Mordecki

    2014-01-01

    The computation of Greeks for exponential Lévy models is usually approached by Malliavin calculus and other methods, such as the likelihood ratio and the finite difference method. In this paper we obtain exact formulas for Greeks of European options based on the Lewis formula for the option value. Therefore, it is possible to obtain accurate approximations using the Fast Fourier Transform. We present an exhaustive development of Greeks for call options. The error is shown for all Greeks in the...

  11. Panel discussion: Innovative approaches to high performance computing

    A large part of research in lattice field theory is carried out via computer simulations. Some research groups use computer clusters readily assembled using off-the-shelf components, while others have been developing dedicated closely coupled massively parallel supercomputers. Pros and cons of these approaches, in particular the affordability and performance of these computers, were discussed. All the options being explored have different specific uses, and it is a good sign for the future that the computer industry is now taking active interest in building special purpose high performance computers

  12. Securing applications in personal computers: the relay race approach.

    Wright, James Michael

    1991-01-01

    Approved for public release; distribution is unlimited. This thesis reviews the increasing need for security in a personal computer (PC) environment and proposes a new approach for securing PC applications at the application layer. The Relay Race Approach extends two standard approaches, data encryption and password access control at the main program level, to the subprogram level by the use of a special parameter, the "Baton". The applicability of this approach is de...

  13. Unbiased QM/MM approach using accurate multipoles from a linear scaling DFT calculation with a systematic basis set

    Mohr, Stephan; Genovese, Luigi; Ratcliff, Laura; Masella, Michel

    The quantum mechanics/molecular mechanics (QM/MM) method is a popular approach that makes it possible to perform atomistic simulations using different levels of accuracy. Since only the essential part of the simulation domain is treated using a highly precise (but also expensive) QM method, whereas the remaining parts are handled using a less accurate level of theory, this approach allows the total system size that can be simulated to be considerably extended without a notable loss of accuracy. In order to couple the QM and MM regions we use an approximation of the electrostatic potential based on a multipole expansion. The multipoles of the QM region are determined from the results of a linear scaling Density Functional Theory (DFT) calculation using a set of adaptive, localized basis functions, as implemented within the BigDFT software package. As this determination comes at virtually no extra cost compared to the QM calculation, the coupling between the QM and MM regions can be done very efficiently. In this presentation I will demonstrate the accuracy of both the linear scaling DFT approach itself and the approximation of the electrostatic potential based on the multipole expansion, and show some first QM/MM applications using the aforementioned approach.

  14. Fractal approach to computer-analytical modelling of tree crown

    In this paper we discuss three approaches to the modeling of tree crown development: experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these is that a tree can be regarded as a fractal object, that is, a collection of self-similar objects combining the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper, the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  15. Q-P Wave traveltime computation by an iterative approach

    Ma, Xuxin

    2013-01-01

    In this work, we present a new approach to compute anisotropic traveltime based on solving successively elliptical isotropic traveltimes. The method shows good accuracy and is very simple to implement.

  16. Computing material fronts with a Lagrange-Projection approach

    Chalons, Christophe

    2010-01-01

    This paper reports investigations on the computation of material fronts in multi-fluid models using a Lagrange-Projection approach. Various forms of the Projection step are considered. Particular attention is paid to minimization of conservation errors.

  17. A genetic and computational approach to structurally classify neuronal types

    Sümbül, Uygar; Song, Sen; McCulloch, Kyle; Becker, Michael; Lin, Bin; Sanes, Joshua R.; Masland, Richard H.; Seung, H. Sebastian

    2014-01-01

    The importance of cell types in understanding brain function is widely appreciated but only a tiny fraction of neuronal diversity has been catalogued. Here, we exploit recent progress in genetic definition of cell types in an objective structural approach to neuronal classification. The approach is based on highly accurate quantification of dendritic arbor position relative to neurites of other cells. We test the method on a population of 363 mouse retinal ganglion cells. For each cell, we de...

  18. Assessment of the extended Koopmans' theorem for the chemical reactivity: Accurate computations of chemical potentials, chemical hardnesses, and electrophilicity indices.

    Yildiz, Dilan; Bozkaya, Uğur

    2016-01-30

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials and electron affinities from any level of theory. Although it is widely applied to ionization potentials, the EKT approach has not previously been applied to the evaluation of chemical reactivity. We present the first benchmarking study to investigate the performance of the EKT methods for predictions of chemical potentials (μ) (hence electronegativities), chemical hardnesses (η), and electrophilicity indices (ω). We assess the performance of the EKT approaches for post-Hartree-Fock methods, such as Møller-Plesset perturbation theory, the coupled-electron pair theory, and their orbital-optimized counterparts, for the evaluation of chemical reactivity. In particular, the results of the orbital-optimized coupled-electron pair theory method (with the aug-cc-pVQZ basis set) for predictions of chemical reactivity are very promising; the corresponding mean absolute errors are 0.16, 0.28, and 0.09 eV for μ, η, and ω, respectively. PMID:26458329
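
    For reference, the three descriptors benchmarked here follow from an ionization potential (IP) and electron affinity (EA) via standard conceptual-DFT definitions; note that some authors include an extra factor of 1/2 in the hardness. The IP/EA values in the example below are illustrative, not results from the paper.

```python
# Worked example of the reactivity descriptors from IP and EA (both in eV).
def reactivity_descriptors(ip, ea):
    mu = -(ip + ea) / 2.0          # chemical potential (negative electronegativity)
    eta = ip - ea                  # chemical hardness (some authors use (IP-EA)/2)
    omega = mu**2 / (2.0 * eta)    # electrophilicity index
    return mu, eta, omega

if __name__ == "__main__":
    mu, eta, omega = reactivity_descriptors(ip=9.3, ea=0.7)
    print(f"mu = {mu:.2f} eV, eta = {eta:.2f} eV, omega = {omega:.2f} eV")
```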

  19. Soft computing approaches to uncertainty propagation in environmental risk mangement

    Kumar, Vikas

    2008-01-01

    Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like pro...

  20. A computational approach to chemical etiologies of diabetes

    Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe

    2013-01-01

    Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic..., and thus offers promising guidance for future research in regard to the etiology and pathogenesis of complex diseases....

  1. General approaches in ensemble quantum computing

    V Vimalan; N Chandrakumar

    2008-01-01

    We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate our methodology experimentally for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of the typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration taken to a third compared to the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.

  2. Multivariate analysis: A statistical approach for computations

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in an image retrieval method and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
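
    The correlation-coefficient-matrix idea mentioned for network traffic can be sketched as follows: a baseline correlation matrix is estimated from a normal-traffic window, and later windows are scored by how far their correlation matrix deviates from it. The feature count, window sizes and the score itself are illustrative assumptions.

```python
# Minimal sketch of correlation-matrix-based anomaly scoring on traffic features.
import numpy as np

def corr_anomaly_score(window, baseline_corr):
    c = np.corrcoef(window, rowvar=False)
    return np.abs(c - baseline_corr).mean()     # mean absolute deviation of entries

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(1000, 5))          # 5 traffic features, normal period
    baseline = np.corrcoef(normal, rowvar=False)

    quiet = rng.normal(size=(100, 5))
    attack = rng.normal(size=(100, 5))
    attack[:, 1] = attack[:, 0] * 3 + rng.normal(0, 0.1, 100)  # induced correlation

    print("quiet score :", corr_anomaly_score(quiet, baseline))
    print("attack score:", corr_anomaly_score(attack, baseline))
```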

  3. Accurate quantification of endogenous androgenic steroids in cattle's meat by gas chromatography mass spectrometry using a surrogate analyte approach

    Determination of endogenous steroids in complex matrices such as cattle's meat is a challenging task. Since endogenous steroids always exist in animal tissues, no analyte-free matrix is available for constructing the standard calibration line, which is crucial for accurate quantification, especially at trace levels. Although some methods have been proposed to solve the problem, none has offered a complete solution. To this aim, a new quantification strategy was developed in this study, named the 'surrogate analyte approach', which is based on using isotope-labeled standards instead of the natural form of endogenous steroids for preparing the calibration line. In comparison with the other methods currently in use for the quantitation of endogenous steroids, this approach provides improved simplicity and speed for analysis on a routine basis. The accuracy of this method is better than that of other methods at low concentrations and comparable to standard addition at medium and high concentrations. The method was also found to be valid according to the ICH criteria for bioanalytical methods. The developed method could be a promising approach in the field of compound residue analysis.
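
    In code, the surrogate-analyte idea reduces to fitting the calibration line on the isotope-labelled standard and reading the natural analyte off that line, optionally corrected by a response factor. The concentrations, peak areas and response factor below are illustrative placeholders.

```python
# Hedged sketch of surrogate-analyte calibration with an isotope-labelled standard.
import numpy as np

# Calibration: spiked concentrations of the labelled surrogate vs. peak areas.
conc_labelled = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # ng/g, assumed
area_labelled = np.array([120.0, 235.0, 480.0, 1190.0, 2400.0])
slope, intercept = np.polyfit(conc_labelled, area_labelled, 1)

response_factor = 1.0   # assumed ratio of natural/labelled responses (ideally ~1)

def quantify(area_natural):
    """Concentration of the natural (endogenous) analyte from its peak area."""
    return (area_natural / response_factor - intercept) / slope

print("estimated concentration: %.2f ng/g" % quantify(640.0))
```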

  4. Numerical Methods for Stochastic Computations A Spectral Method Approach

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  5. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to accurately predict the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which

  6. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
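
    The first two steps, encoding a pose as a 3D point and assigning it an octree cell identifier before picking the densest cell, can be sketched as below; the MapReduce distribution itself is omitted, and the point cloud and tree depth are illustrative assumptions.

```python
# Sketch of octree-style binning of 3D points (encoded ligand poses) followed
# by selection of the most populated octant (illustrative data and depth).
import numpy as np
from collections import Counter

def octant_id(points, lo, hi, depth=3):
    """Interleave per-axis bin indices into a single octree cell id."""
    n_bins = 2 ** depth
    idx = np.clip(((points - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    ids = np.zeros(len(points), dtype=np.int64)
    for level in range(depth):
        for axis in range(3):
            bit = (idx[:, axis] >> level) & 1
            ids |= bit << (3 * level + axis)
    return ids

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = np.vstack([rng.normal(0.2, 0.05, (300, 3)),   # dense "native-like" cluster
                     rng.uniform(-1, 1, (200, 3))])     # scattered decoys
    ids = octant_id(pts, lo=np.array([-1.0] * 3), hi=np.array([1.0] * 3))
    cell, count = Counter(ids.tolist()).most_common(1)[0]
    print(f"densest octant id={cell} holds {count} poses")
```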

  7. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    Abolfazli, Saeid; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward a rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, namely hardware and software. Generating high-end hardware is a subset of hardware augmentation approaches, whereas conserving local resources and reducing resource requirements are grouped under software augmentation methods. Our study advocates that conserving smartphones' native resources, which is mainly done via task offloading, is more appropriate for already-developed applications than for new ones, due to the costly re-development process. Cloud computing has recently gained significant ground as one of the major co...

  8. What is intrinsic motivation? A typology of computational approaches

    Pierre-Yves Oudeyer

    2009-11-01

    Full Text Available Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.

  9. Convergence Analysis of a Class of Computational Intelligence Approaches

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, it is difficult to analyze their convergence. In this paper, a computational model is built up for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Then, two quantification indices, the variation rate and the progress rate, are defined to indicate the variety and the optimality, respectively, of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution-set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  10. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
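
    A toy discrete-time SIS simulation on a Barabási-Albert contact graph, sketched below, illustrates the qualitative conclusion that high-degree nodes are infected more often; the rates, graph size and discrete-time dynamics are simplifying assumptions and not the paper's differential-equation model.

```python
# Toy SIS-style spread on a scale-free graph; tracks infection frequency vs. degree.
import networkx as nx
import numpy as np

def simulate_sis(g, beta=0.05, gamma=0.1, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    infected = {int(rng.integers(len(g)))}       # one randomly infected node
    exposure = np.zeros(len(g))
    for _ in range(steps):
        new_inf = set()
        for u in infected:
            for v in g.neighbors(u):
                if v not in infected and rng.random() < beta:
                    new_inf.add(v)
        recovered = {u for u in infected if rng.random() < gamma}
        infected = (infected - recovered) | new_inf
        for u in infected:
            exposure[u] += 1
    return exposure / steps                      # fraction of time each node was infected

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(2000, 3, seed=1)
    freq = simulate_sis(g)
    deg = np.array([g.degree(n) for n in g])
    hubs, leaves = freq[deg >= 10].mean(), freq[deg <= 3].mean()
    print(f"mean infection frequency: high-degree {hubs:.3f} vs low-degree {leaves:.3f}")
```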

  11. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  12. An approach to computing direction relations between separated object groups

    H. Yan

    2013-06-01

    Full Text Available Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups; it then constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and that the results are correct from the point of view of spatial cognition.

  13. Computational experiment approach to advanced secondary mathematics curriculum

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  14. An approach to computing direction relations between separated object groups

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and that the results are correct from the point of view of spatial cognition.

  15. A tale of three bio-inspired computational approaches

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, and some lessons I have gleaned from three decades of working with EC will be covered. Next come neural networks, computational approaches that have long been studied as possible ways to make "thinking machines", an old dream, and that are based upon the only known existing example of intelligence. I will then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  16. A GPU-Computing Approach to Solar Stokes Profile Inversion

    Harker, Brian J.; Mighell, Kenneth J.

    2012-01-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specif...
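
    GENESIS itself is GPU code and is not reproduced here; the sketch below is a generic, CPU-only genetic algorithm on a toy quadratic fitness function, included only to illustrate the population-parallel structure the record describes (every member's fitness is evaluated independently, which is what maps naturally onto GPU threads). All parameters and the fitness function are illustrative assumptions.

```python
# Generic genetic-algorithm sketch (toy fitness, CPU only), not the GENESIS code.
# Fitness evaluation of all population members is independent, which is the
# data-parallel step that a GPU implementation would distribute over threads.
import numpy as np

rng = np.random.default_rng(0)
pop_size, n_params, n_gen = 64, 5, 200
target = np.array([0.3, -1.2, 2.0, 0.7, -0.4])       # hypothetical "true" model

def fitness(pop):
    # independent per-member evaluation (the data-parallel step)
    return -np.sum((pop - target) ** 2, axis=1)

pop = rng.uniform(-3, 3, size=(pop_size, n_params))
for _ in range(n_gen):
    f = fitness(pop)
    # tournament selection
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # uniform crossover + Gaussian mutation
    mates = parents[rng.permutation(pop_size)]
    mask = rng.random((pop_size, n_params)) < 0.5
    children = np.where(mask, parents, mates) + rng.normal(0, 0.05, (pop_size, n_params))
    # elitism: keep the best member unchanged
    children[0] = pop[np.argmax(f)]
    pop = children

print("best model:", pop[np.argmax(fitness(pop))])
```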

  17. Computational approaches for the study of biotechnologically-relevant macromolecules.

    Filippi, G.

    2016-01-01

    Computer-based techniques have become especially important in molecular biology, since they often represent the only viable way to understand some phenomena at the atomic and molecular level. The complexity of biological systems, which usually needs to be analyzed with different levels of accuracy, requires the application of different approaches. Computational methodologies applied to biotechnologies allow a molecular comprehension of biological systems at different levels of depth. Quantum mech...

  18. Distributed computer-controlled systems: the DEAR-COTS approach

    Veríssimo, P.; Casimiro, A.; Pinho, L. M.; Vasques, F.; Rodrigues, L.; Tovar, E.

    2000-01-01

    This paper proposes a new architecture targeting real-time and reliable Distributed Computer-Controlled Systems (DCCS). This architecture provides a structured approach for the integration of soft and/or hard real-time applications with Commercial Off-The-Shelf (COTS) components. The Timely Computing Base model is used as the reference model to deal with the heterogeneity of system components with respect to guaranteeing the timeliness of applications. The reliability and ava...

  19. Computational biomechanics for medicine new approaches and new applications

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  20. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Galli, Corrado Lodovico; Sensi, Cristina; Fumagalli, Amos; Parravicini, Chiara; Marinovich, Marina; Eberini, Ivano

    2014-01-01

    Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was shown to be useful as a first step of chemical safety evaluation, since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity towards the AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy. PMID:25111804

  1. A Computationally Based Approach to Homogenizing Advanced Alloys

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as applied both to Ni-based superalloys and to the computationally more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and their constituent elements therefore diffuse slowly. The computationally designed heat treatment and its subsequent verification on real castings are presented.

  2. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and with G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods. PMID:25068299
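
    Underlying any error-cancelling scheme such as CBH is simple Hess's-law bookkeeping: a computed reaction enthalpy plus known heats of formation of the reference species yield the target's heat of formation. The sketch below shows only that bookkeeping; the reaction and all numbers are hypothetical placeholders, not values from the paper.

```python
# Minimal Hess's-law bookkeeping behind an error-cancelling scheme: given a
# balanced reaction  target + sum(reactants) -> sum(products), a computed
# reaction enthalpy, and heats of formation for every species except the
# target, solve for the target's heat of formation. The reaction and numbers
# below are hypothetical placeholders, not values from the paper.

def heat_of_formation(dH_rxn, reactant_dHf, product_dHf):
    """dHf(target) = sum(dHf products) - sum(dHf other reactants) - dH_rxn."""
    return sum(product_dHf) - sum(reactant_dHf) - dH_rxn

# hypothetical example (kcal/mol): target + A -> B + C
dH_rxn_computed = -3.1          # from an inexpensive DFT calculation (assumed)
dHf_target = heat_of_formation(dH_rxn_computed,
                               reactant_dHf=[-17.8],        # A
                               product_dHf=[-20.0, -25.0])  # B, C
print("estimated dHf(target) = %.1f kcal/mol" % dHf_target)
```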

  3. An engineering based approach for hydraulic computations in river flows

    Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.

    2016-06-01

    This paper presents an engineering-based approach to hydraulic risk evaluation. The aim of the research is to identify a criterion for choosing the simplest appropriate model to use in different scenarios as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities and accelerations, can be described through a three-dimensional approach that considers all the flow properties varying in all directions. In many practical applications of river flow studies, however, the greatest changes occur in only two dimensions or even only one. In these cases the use of simplified approaches can lead to accurate results, with easy-to-build and faster simulations. The study has been conducted taking into account a dimensionless channel parameter, the ratio of the curvature radius to the channel width (R/B).

  4. A GPU-Computing Approach to Solar Stokes Profile Inversion

    Harker, Brian J

    2012-01-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disc maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel genetic algorithm with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disc vector ma...

  5. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes.

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-10-30

    Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to more complex outputs. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it is improved by several molecular fragments in the second model. For 57 benzoic compounds, for which quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model. PMID:22959133

  6. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.

  7. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to more complex outputs. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it is improved by several molecular fragments in the second model. For 57 benzoic compounds, for which quantitative structure–toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  8. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    Keshavarz, Mohammad Hossein, E-mail: mhkeshavarz@mut-es.ac.ir [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of); Gharagheizi, Farhad [Department of Chemical Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Islamic Republic of Iran (Iran, Islamic Republic of); Shokrolahi, Arash; Zakinejad, Sajjad [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of)

    2012-10-30

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to more complex outputs. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it is improved by several molecular fragments in the second model. For 57 benzoic compounds, for which quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  9. Computer vision approaches to medical image analysis. Revised papers

    This book constitutes the thoroughly refereed post-proceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, ECCV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)

  10. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    Abadjiev Valentin

    2014-09-01

    Full Text Available Computer-aided design has fostered the development of various types of software for scientific research in the field of gearing theory, as well as providing adequate scientific support for gear drive manufacture. The computer programs presented here are based on mathematical models resulting from this research. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on adequate iteration procedures that find an optimal solution by varying definite parameters.

  11. Transparency and deliberation within the FOMC: a computational linguistics approach

    Hansen, Stephen; McMahon, Michael; Prat, Andrea

    2014-01-01

    How does transparency, a key feature of central bank design, affect the deliberation of monetary policymakers? We exploit a natural experiment in the Federal Open Market Committee in 1993 together with computational linguistic models (particularly Latent Dirichlet Allocation) to measure the effect of increased transparency on debate. Commentators have hypothesized both a beneficial discipline effect and a detrimental conformity effect. A difference-in-differences approach inspired by the care...
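
    As a toy illustration of the kind of topic decomposition referred to above, the sketch below fits scikit-learn's LatentDirichletAllocation to a few invented sentences; it is not the authors' corpus, vocabulary, or model configuration.

```python
# Toy topic-model sketch with scikit-learn's LDA, showing the kind of topic
# decomposition used to quantify deliberation; not the authors' corpus or model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "inflation outlook price stability expectations",
    "labor market employment wage growth",
    "inflation expectations anchored price pressures",
    "employment payrolls labor participation",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# per-document topic proportions, the quantity one would track across meetings
print(lda.transform(X).round(2))
```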

  12. Computational Fluid Dynamic Approach for Biological System Modeling

    Huang, Weidong; Wu, Chundu; Xiao, Bingjia; Xia, Weidong

    2005-01-01

    Various biological system models have been proposed in systems biology, based on the complex reaction kinetics of their various components. These models are often impractical because the kinetic information is lacking. In this paper, it is found that enzymatic and multi-order reaction rates are often controlled by the transport of the reactants in biological systems. A Computational Fluid Dynamics (CFD) approach, which is based on transport of the components and kinetics of b...

  13. Computer Mechatronics: A Radical Approach to Mechatronics Education

    Nilsson, Martin

    2005-01-01

    This paper describes some distinguishing features of a course on mechatronics, based on computer science. We propose a teaching approach called Controlled Problem-Based Learning (CPBL). We have applied this method on three generations (2003-2005) of mainly fourth-year undergraduate students at Lund University (LTH). Although students found the course difficult, there were no dropouts, and all students attended the examination 2005.

  14. WSRC approach to validation of criticality safety computer codes

    Finch, D.R.; Mincey, J.F.

    1991-12-31

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (Keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
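
    As a rough illustration of the statistics a broad validation builds from benchmark calculations, the sketch below computes the bias and scatter of calculated Keff values against critical benchmarks and an illustrative margin. The Keff values are hypothetical and the statistical treatment is simplified relative to formal criticality-safety practice.

```python
# Sketch of validation statistics over critical benchmarks: bias of calculated
# k-eff against the known critical value (k = 1) and a simple margin based on
# the scatter. The k-eff values are hypothetical and the statistics simplified
# relative to formal criticality-safety practice.
import statistics

k_calc = [0.9965, 1.0012, 0.9981, 0.9990, 1.0024, 0.9958, 0.9979, 1.0005]
residuals = [k - 1.0 for k in k_calc]          # benchmarks are critical, k = 1

bias = statistics.mean(residuals)
sigma = statistics.stdev(residuals)

# illustrative upper subcritical limit: unity plus any (negative) bias, minus a
# 2-sigma allowance and a 0.05 administrative margin
usl = 1.0 + min(bias, 0.0) - 2.0 * sigma - 0.05
print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, illustrative USL = {usl:.4f}")
```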

  15. WSRC approach to validation of criticality safety computer codes

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (Keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  16. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has fostered the development of various types of software for scientific research in the field of gearing theory, as well as providing adequate scientific support for gear drive manufacture. The computer programs presented here are based on mathematical models resulting from this research. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on adequate iteration procedures that find an optimal solution by varying definite parameters. The study is dedicated to the accepted methodology in the creation of software for the synthesis of a class of high-reduction hyperboloid gears, Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by Illinois Tool Works, Chicago, Ill.). The developed basic computer products are software based on original mathematical models. They rely on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs have been worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives in question is illustrated.

  17. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge but also because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate; this can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  18. A computational approach to developing mathematical models of polyploid meiosis.

    Rehmsmeier, Marc

    2013-04-01

    Mathematical models of meiosis that relate offspring to parental genotypes through parameters such as meiotic recombination frequency have been difficult to develop for polyploids. Existing models have limitations with respect to their analytic potential, their compatibility with insights into mechanistic aspects of meiosis, and their treatment of model parameters in terms of parameter dependencies. In this article I put forward a computational approach to the probabilistic modeling of meiosis. A computer program enumerates all possible paths through the phases of replication, pairing, recombination, and segregation, while keeping track of the probabilities of the paths according to the various parameters involved. Probabilities for classes of genotypes or phenotypes are added, and the resulting formulas are simplified by the symbolic-computation system Mathematica. An example application to autotetraploids results in a model that remedies the limitations of previous models mentioned above. In addition to the immediate implications, the computational approach presented here can be expected to be useful through opening avenues for modeling a host of processes, including meiosis in higher-order ploidies. PMID:23335332
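
    As a toy illustration of the strategy described above, the sketch below enumerates all paths through two probabilistic phases while carrying symbolic probabilities, and lets SymPy (standing in here for Mathematica) simplify the resulting class probabilities. The phases, outcomes, and the parameter r are hypothetical and far simpler than real (polyploid) meiosis.

```python
# Toy illustration: enumerate paths through a few probabilistic "phases" while
# tracking symbolic probabilities, then simplify the class probabilities with a
# symbolic engine (SymPy standing in for Mathematica). The phases and the
# parameter r are hypothetical, far simpler than meiosis.
import itertools
import sympy as sp

r = sp.symbols("r", positive=True)               # a recombination-like parameter (toy)

# two phases, each with symbolic outcome probabilities
phase1 = [("pairA", 1 - r), ("pairB", r)]
phase2 = [("segX", sp.Rational(1, 2)), ("segY", sp.Rational(1, 2))]

class_prob = {}
for (o1, p1), (o2, p2) in itertools.product(phase1, phase2):
    key = f"{o1}/{o2}"                           # each path maps to an observable class
    class_prob[key] = sp.simplify(class_prob.get(key, 0) + p1 * p2)

total = 0
for key, prob in class_prob.items():
    print(key, "=", prob)                        # e.g. pairA/segX = 1/2 - r/2
    total += prob
print("sum =", sp.simplify(total))               # sanity check: probabilities sum to 1
```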

  19. WSRC approach to validation of criticality safety computer codes

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (Keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed

  20. Reservoir Computing approach to Great Lakes water level forecasting

    Coulibaly, Paulin

    2010-02-01

    The use of echo state networks (ESN) for dynamical system modeling is known as Reservoir Computing and has been shown to be effective for a number of applications, including signal processing, learning grammatical structure, time series prediction and motor/system control. However, the performance of the Reservoir Computing approach on hydrological time series remains largely unexplored. This study investigates the potential of ESN, or Reservoir Computing, for long-term prediction of lake water levels. Great Lakes water levels from 1918 to 2005 are used to develop and evaluate the ESN models. The forecast performance of the ESN-based models is compared with the results obtained from two benchmark models, the conventional recurrent neural network (RNN) and the Bayesian neural network (BNN). The test results indicate a strong ability of ESN models to provide improved lake level forecasts up to 10 months ahead, suggesting that the inherent structure and innovative learning approach of the ESN is suitable for hydrological time series modeling. Another particular advantage of the ESN learning approach is that it simplifies the network training complexity and avoids the limitations inherent to the gradient descent optimization method. Overall, it is shown that the ESN can be a good alternative method for improved lake level forecasting, performing better than both the RNN and the BNN on the four selected Great Lakes time series, namely, the Lakes Erie, Huron-Michigan, Ontario, and Superior.
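
    A minimal echo state network can be sketched in a few lines of NumPy: a fixed random reservoir scaled below unit spectral radius and a ridge-regression readout. The example below trains such a network on a toy periodic series for one-step-ahead prediction; the reservoir size, spectral radius, regularization, and data are illustrative assumptions, not the configuration used in the study.

```python
# Minimal echo state network: random reservoir with spectral radius < 1 and a
# ridge-regression readout, trained for one-step-ahead prediction on a toy
# periodic series (all sizes and constants are illustrative).
import numpy as np

rng = np.random.default_rng(1)
n_res, rho, ridge, washout = 200, 0.9, 1e-6, 200

t = np.arange(2000)
u = np.sin(2 * np.pi * t / 120.0) + 0.1 * rng.standard_normal(t.size)  # toy "level" series

W_in = rng.uniform(-0.5, 0.5, n_res)                 # input weights
W = rng.standard_normal((n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))      # fix spectral radius below 1

states = np.zeros((t.size, n_res))
x = np.zeros(n_res)
for k in range(t.size):
    x = np.tanh(W_in * u[k] + W @ x)                 # reservoir update
    states[k] = x

S = states[washout:-2]                               # training states
Y = u[washout + 1:-1]                                # next-step targets
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)   # ridge readout

pred = states[-2] @ W_out                            # predict the final observation
print("one-step-ahead prediction: %.3f (actual %.3f)" % (pred, u[-1]))
```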

  1. A computationally efficient and accurate numerical representation of thermodynamic properties of steam condensing steam flow in steam turbines

    Hrubý, Jan

    Vol. 1. Liberec: TU Liberec, 2011 - (Vít, T.; Dančová, P.; Novotný, P.), pp. 177-186. ISBN 978-80-7372-784-0. [Experimental Fluid Mechanics 2011. Jičín (CZ), 22.11.2011-25.11.2011] R&D Projects: GA AV ČR IAA200760905; GA ČR(CZ) GAP101/11/1593; GA MŠk LA09011 Institutional research plan: CEZ:AV0Z20760514 Keywords: steam turbine * equation of state * computational fluid dynamics Subject RIV: BJ - Thermodynamics

  2. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations.

    Harb, Moussab

    2015-10-14

    Using accurate first-principles quantum calculations based on DFT (including the DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we can predict the essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit a relatively high absorption efficiency in the visible range, high dielectric constant, high charge carrier mobility and much lower exciton binding energy than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties were found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices such as Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications. PMID:26351755

  3. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    Harb, Moussab

    2015-08-26

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.

  4. CAFE: A Computer Tool for Accurate Simulation of the Regulatory Pool Fire Environment for Type B Packages

    The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically an engulfing fire boundary condition has been modeled as σT^4, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. Effects included are the local cooling of gases that form a protective boundary layer, which reduces the incoming radiant heat flux to values lower than expected from a simple σT^4 model. In addition, the effect of object shape on mixing that may increase the local fire temperature is included. Both high and low temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios.
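
    A back-of-the-envelope comparison of the simple σT^4 boundary condition mentioned above with a flux reduced by an assumed cooler boundary layer is sketched below; the 800 °C value corresponds to the regulatory engulfing fire, while the boundary-layer temperature and emissivity are illustrative, not CAFE output.

```python
# Back-of-the-envelope illustration of the simple sigma*T^4 boundary condition
# that CAFE refines: black-body flux at the regulatory fire temperature versus
# the lower flux implied by an (assumed) cooler boundary layer near the package.
# The boundary-layer temperature and emissivity are illustrative values.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(t_kelvin, emissivity=1.0):
    return emissivity * SIGMA * t_kelvin ** 4

t_fire = 800.0 + 273.15            # regulatory 800 C engulfing fire
t_boundary_layer = 700.0 + 273.15  # hypothetical locally cooled gas layer

print("simple sigma*T^4 flux : %.0f kW/m^2" % (radiant_flux(t_fire) / 1e3))
print("cooled-layer flux     : %.0f kW/m^2" % (radiant_flux(t_boundary_layer) / 1e3))
```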

  5. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
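
    A minimal random-walk Metropolis sampler (much simpler than the DRAM sampler and sparse-grid surrogate described above) conveys the basic idea: draw samples from the posterior over a damage parameter given noisy sensor data. The forward model, prior, noise level, and chain settings below are toy stand-ins, not the paper's.

```python
# Minimal random-walk Metropolis sketch: infer a single "damage size" parameter
# from noisy synthetic strain data. Forward model, prior, and noise are toy
# stand-ins, far simpler than the surrogate-accelerated DRAM sampler above.
import numpy as np

rng = np.random.default_rng(2)

def forward(size):                        # toy surrogate: strain response vs. damage size
    return np.array([1.0, 2.0, 0.5]) * size

true_size, noise = 3.0, 0.05
data = forward(true_size) + noise * rng.standard_normal(3)

def log_post(size):
    if not 0.0 < size < 10.0:             # uniform prior on (0, 10)
        return -np.inf
    r = data - forward(size)
    return -0.5 * np.sum(r ** 2) / noise ** 2

cur, samples = 5.0, []
lp = log_post(cur)
for _ in range(20000):
    prop = cur + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        cur, lp = prop, lp_prop
    samples.append(cur)

post = np.array(samples[5000:])           # discard burn-in
print("damage size: %.2f +/- %.2f (true %.1f)" % (post.mean(), post.std(), true_size))
```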

  6. A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems

    Factorovich, Matías H.; Scherlis, Damián A. [Departamento de Química Inorgánica, Analítica y Química Física/INQUIMAE, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pab. II, Buenos Aires C1428EHA (Argentina); Molinero, Valeria [Department of Chemistry, University of Utah, 315 South 1400 East, Salt Lake City, Utah 84112-0850 (United States)

    2014-02-14

    In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.

  7. Use of CMEIAS Image Analysis Software to Accurately Compute Attributes of Cell Size, Morphology, Spatial Aggregation and Color Segmentation that Signify in Situ Ecophysiological Adaptations in Microbial Biofilm Communities

    Frank B. Dazzo

    2015-03-01

    Full Text Available In this review, we describe computational features of computer-assisted microscopy that are unique to the Center for Microbial Ecology Image Analysis System (CMEIAS) software, and examples illustrating how they can be used to gain ecophysiological insights into microbial adaptations occurring at micrometer spatial scales directly relevant to individual cells occupying their ecological niches in situ. These features include algorithms that accurately measure (1) microbial cell length relevant to avoidance of protozoan bacteriovory; (2) microbial biovolume body mass relevant to allometric scaling and local apportionment of growth-supporting nutrient resources; (3) pattern recognition rules for morphotype classification of diverse microbial communities relevant to their enhanced fitness for success in the particular habitat; (4) spatial patterns of coaggregation that reveal the local intensity of cooperative vs. competitive adaptations in colonization behavior relevant to microbial biofilm ecology; and (5) object segmentation of complex color images to differentiate target microbes reporting successful cell-cell communication. These unique computational features contribute to the CMEIAS mission of developing accurate and freely accessible tools of image bioinformatics that strengthen microscopy-based approaches for understanding microbial ecology at single-cell resolution.

  8. Computational approaches to detect allosteric pathways in transmembrane molecular machines.

    Stolzenberg, Sebastian; Michino, Mayako; LeVine, Michael V; Weinstein, Harel; Shi, Lei

    2016-07-01

    Many of the functions of transmembrane proteins involved in signal processing and transduction across the cell membrane are determined by allosteric couplings that propagate the functional effects well beyond the original site of activation. Data gathered from breakthroughs in biochemistry, crystallography, and single molecule fluorescence have established a rich basis of information for the study of molecular mechanisms in the allosteric couplings of such transmembrane proteins. The mechanistic details of these couplings, many of which have therapeutic implications, however, have only become accessible in synergy with molecular modeling and simulations. Here, we review some recent computational approaches that analyze allosteric coupling networks (ACNs) in transmembrane proteins, and in particular the recently developed Protein Interaction Analyzer (PIA) designed to study ACNs in the structural ensembles sampled by molecular dynamics simulations. The power of these computational approaches in interrogating the functional mechanisms of transmembrane proteins is illustrated with selected examples of recent experimental and computational studies pursued synergistically in the investigation of secondary active transporters and GPCRs. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26806157

  9. SPINET: A Parallel Computing Approach to Spine Simulations

    Peter G. Kropf

    1996-01-01

    Full Text Available Research in scientific programming enables us to realize more and more complex applications, while, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed by the Lagrange formalism using Maple.
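
    The iterative solver mentioned above is the classical conjugate gradient method; a textbook, unpreconditioned NumPy version is sketched below on a small random symmetric positive-definite system. This is not the SPINET/MUSIC implementation.

```python
# Textbook (unpreconditioned) conjugate-gradient solver of the kind mentioned
# above for sparse symmetric positive-definite FE systems.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# small symmetric positive-definite test system
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```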

  10. Computational approaches to parameter estimation and model selection in immunology

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
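
    The Akaike-index bookkeeping used to arrange the models in a hierarchy reduces to AIC = 2k - 2 ln L per model, plus Akaike differences and weights; a small sketch follows, with hypothetical parameter counts and log-likelihoods rather than values from the study.

```python
# Sketch of the Akaike-index bookkeeping used to rank a model hierarchy:
# AIC = 2k - 2 ln(L), plus Akaike differences and weights. The parameter counts
# and log-likelihoods below are hypothetical placeholders.
import math

models = {                      # name: (number of parameters k, maximized log-likelihood)
    "ODE, 4 parameters": (4, -152.3),
    "DDE, 5 parameters": (5, -148.9),
    "DDE, 7 parameters": (7, -148.1),
}

aic = {name: 2 * k - 2 * loglik for name, (k, loglik) in models.items()}
best = min(aic.values())
weights = {name: math.exp(-(a - best) / 2) for name, a in aic.items()}
total = sum(weights.values())

for name in sorted(aic, key=aic.get):
    print(f"{name}: AIC={aic[name]:.1f}, dAIC={aic[name]-best:.1f}, "
          f"weight={weights[name]/total:.2f}")
```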

  11. Computing electronic structures: A new multiconfiguration approach for excited states

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  12. Computational approaches for rational design of proteins with novel functionalities

    Manish Kumar Tiwari

    2012-09-01

    Full Text Available Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  13. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    Dongsheng Che

    2014-01-01

    Full Text Available High-throughput sequencing technologies have made it possible to study bacteria by analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that the pathogenic genomic regions in many pathogenic bacteria have been horizontally transferred from other bacteria; these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed, which researchers may find useful for studies of bacterial evolution and pathogenicity mechanisms.
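
    One of the simplest "different genomic signature" checks mentioned above is a sliding-window GC-content scan that flags windows deviating strongly from the genome-wide average as candidate horizontally acquired regions; the sketch below implements only that check on a synthetic genome (window size, step, and threshold are arbitrary), whereas real PAI predictors combine many more features such as k-mer bias, mobility genes, and tRNA insertion sites.

```python
# Sliding-window GC-content scan on a synthetic genome: windows whose GC content
# deviates strongly from the genome-wide average are flagged as candidate
# horizontally acquired regions. Window/step/threshold values are arbitrary.
import random

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_atypical_windows(genome, window=5000, step=1000, z_cut=2.5):
    gcs = [gc_content(genome[i:i + window])
           for i in range(0, len(genome) - window + 1, step)]
    mean = sum(gcs) / len(gcs)
    sd = (sum((g - mean) ** 2 for g in gcs) / len(gcs)) ** 0.5
    return [(i * step, i * step + window, g)
            for i, g in enumerate(gcs) if sd > 0 and abs(g - mean) / sd > z_cut]

# toy genome: ~50% GC host with an inserted GC-poor island
random.seed(0)
host = "".join(random.choice("ACGT") for _ in range(200_000))
island = "".join(random.choice("AATTAATTAATTGC") for _ in range(20_000))
genome = host[:100_000] + island + host[100_000:]

for start, end, gc in flag_atypical_windows(genome)[:5]:
    print(f"candidate island window: {start}-{end}, GC = {gc:.2f}")
```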

  14. An Improved Computational Approach for Salient Region Detection

    Qiaorong Zhang

    2010-07-01

    Full Text Available Salient region detection in images is very useful for image processing applications like image compression, image segmentation, object detection and recognition. In this paper, an improved approach to detecting salient regions is presented. The proposed method can generate a robust saliency map and extract salient regions with precise boundaries. In the proposed method, the local saliency, global saliency and rarity saliency of three kinds of low-level feature contrast (intensity, color and orientation) are used to compute the visual saliency. A new feature integration strategy is proposed in this paper. This method can select features and compute the weights of the features dynamically by analyzing the effect of different features on the saliency. A more robust saliency map is thereby obtained. The approach has been tested on many images to evaluate its validity and effectiveness. We also compare our method with other salient region detection methods, and our method outperforms them in detection results.
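
    As a minimal illustration of one of the three feature channels mentioned (intensity), the sketch below computes a center-surround intensity-contrast saliency map with two Gaussian scales; the color and orientation channels and the dynamic feature-weighting strategy of the proposed method are omitted, and the scales are arbitrary.

```python
# Minimal center-surround intensity-contrast saliency sketch, covering only the
# intensity channel; the color/orientation channels and dynamic feature
# weighting of the proposed method are omitted, and the scales are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(image):
    """image: 2-D float array in [0, 1]; returns a normalized saliency map."""
    center = gaussian_filter(image, sigma=2)      # fine scale
    surround = gaussian_filter(image, sigma=16)   # coarse scale
    sal = np.abs(center - surround)               # center-surround contrast
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# toy image: dark background with one bright square
img = np.zeros((128, 128))
img[48:64, 80:96] = 1.0
sal = intensity_saliency(img)
print("most salient pixel:", np.unravel_index(sal.argmax(), sal.shape))
```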

  15. Benchmarking of computer codes and approaches for modeling exposure scenarios

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided

  16. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate around the role of personal computer work as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of video display units (VDUs), we must distinguish those affecting the upper limbs and, among these, those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and the occupational etiology is clinically difficult to prove, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time of use of the personal computer or keyboard; limited evidence of correlation is found with time using a mouse. No data are available regarding the exclusive or predominant use of personal laptops or mobile "smart phones". PMID:23405595

  17. A pencil beam approach to proton computed tomography

    Rescigno, Regina, E-mail: regina.rescigno@iphc.cnrs.fr; Bopp, Cécile; Rousseau, Marc; Brasse, David [Université de Strasbourg, IPHC, 23 rue du Loess, Strasbourg 67037, France and CNRS, UMR7178, Strasbourg 67037 (France)

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one but a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of the mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a high level of noise compared to the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performances and limitations of

  19. Monitoring Neuromotricity On-line: a Cloud Computing Approach

    Lefebvre, Olivier; Riba, Pau; Gagnon-Marchand, Jules; Fournier, Charles; Fornes, Alicia; Llados, Josep; Plamondon, Réjean

    2015-01-01

    The goal of our experiment is to develop a useful and accessible tool that can be used to evaluate a patient's health by analyzing handwritten strokes. We use a cloud computing approach to analyze stroke data sampled on a commercial tablet working on the Android platform and a distant server to perform complex calculations using the Delta and Sigma lognormal algorithms. A Google Drive account is used to store the data and to ease the development of the project. The communication between the t...

  20. A computational approach to mechanistic and predictive toxicology of pesticides

    Kongsbak, Kristine Grønning; Vinggaard, Anne Marie; Hadrup, Niels;

    2014-01-01

    Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes...... protein interactome. Then, we explored the modes of action of the chemicals by integrating protein-disease information into the resulting protein networks. The dominant human adverse effects were reproductive disorders, followed by adrenal diseases. Our results indicated that prochloraz, tebuconazole...

  1. A New Computational Scheme for Computing Greeks by the Asymptotic Expansion Approach

    Matsuoka, Ryosuke; Takahashi, Akihiko; Uchida, Yoshihiko

    2005-01-01

    We developed a new scheme for computing "Greeks" of derivatives by an asymptotic expansion approach. In particular, we derived analytical approximation formulae for deltas and vegas of plain vanilla and average European call options under general Markovian processes of underlying asset prices. Moreover, we introduced a new variance reduction method for Monte Carlo simulations based on the asymptotic expansion scheme. Finally, several numerical examples under CEV processes confirmed the validity...
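    The asymptotic-expansion formulae themselves are not reproduced here; as a generic point of comparison, the following sketch estimates the delta and vega of a plain-vanilla European call by pathwise Monte Carlo under geometric Brownian motion, with illustrative parameter values.

```python
import numpy as np

# Minimal sketch (not the paper's asymptotic-expansion formulae): pathwise Monte
# Carlo estimators for the delta and vega of a plain-vanilla European call under
# geometric Brownian motion. All parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
S0, K, r, sigma, T, n_paths = 100.0, 105.0, 0.02, 0.25, 1.0, 200_000

Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
itm = (ST > K).astype(float)          # indicator of finishing in the money
disc = np.exp(-r * T)

delta = np.mean(disc * itm * ST / S0)
vega = np.mean(disc * itm * ST * (np.log(ST / S0) - (r + 0.5 * sigma**2) * T) / sigma)
print(f"pathwise delta ~ {delta:.4f}, pathwise vega ~ {vega:.4f}")
```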

  2. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Dilip Swaminathan

    2009-01-01

    kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and as we discuss can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.

  3. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    Junaid Ali Khan; Muhammad Asif Zahoor Raja; Ijaz Mansoor Qureshi

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by a pattern search technique. The applicability of this approach ranges from single order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike the other numerical techniques with comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in execution speed.
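    A minimal sketch of the underlying idea, under simplified assumptions: a small feed-forward network defines a trial solution whose ODE residual serves as the unsupervised error, and a crude (1+λ) evolutionary search stands in for the paper's hybrid genetic-algorithm/pattern-search training.

```python
import numpy as np

# Sketch: neural-network trial solution of y' = -y, y(0) = 1, trained by minimizing
# the residual ("unsupervised error") with a simple (1+8) evolutionary mutation loop.

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 21)          # collocation points
H = 8                                   # hidden units

def unpack(p):
    return p[:H], p[H:2*H], p[2*H:]     # input weights w, biases b, output weights v

def residual_loss(p):
    # Trial solution y(x) = 1 + x*N(x) automatically satisfies y(0) = 1.
    w, b, v = unpack(p)
    t = np.tanh(np.outer(x, w) + b)                 # hidden activations
    N = t @ v
    dN = ((1.0 - t**2) * w) @ v                     # dN/dx
    y = 1.0 + x * N
    dy = N + x * dN
    return np.mean((dy + y) ** 2)                   # residual of y' + y = 0

p = rng.normal(scale=0.5, size=3 * H)
best = residual_loss(p)
for _ in range(5000):                               # (1+8)-ES style random mutation
    cand = p + rng.normal(scale=0.05, size=(8, p.size))
    losses = [residual_loss(c) for c in cand]
    i = int(np.argmin(losses))
    if losses[i] < best:
        p, best = cand[i], losses[i]

w, b, v = unpack(p)
y1 = 1.0 + 1.0 * (np.tanh(1.0 * w + b) @ v)
print(f"residual loss {best:.2e}, y(1) ~ {y1:.4f} vs exact {np.exp(-1):.4f}")
```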

  4. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, thus NWP can be considered as an inverse problem to uncover the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus resolve the deficiency and faultiness of the NWP schemes employing only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement achieved by the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)
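    A heavily simplified sketch of the inverse-problem idea on Burgers' equation: "observations" are generated by a forced (true) model, and the unknown forcing, standing in for the model error, is recovered as the mean one-step residual of the unforced forecast model.

```python
import numpy as np

# Sketch: the "true" atmosphere is a viscous Burgers' model with a hidden constant
# forcing (the model error); the forecast model omits it. The error term is estimated
# as the mean one-step residual between observations and the unforced forecast.

nx, dx, dt, nu = 101, 1.0 / 100, 2e-4, 0.01
x = np.linspace(0.0, 1.0, nx)
true_error = 0.3 * np.sin(2 * np.pi * x)            # hidden model-error field

def step(u, forcing=0.0):
    # One explicit step of u_t + u u_x = nu u_xx + forcing (periodic boundaries).
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * ux + nu * uxx + forcing)

# Generate "observations" from the true (forced) model.
u_obs = [np.sin(2 * np.pi * x)]
for _ in range(200):
    u_obs.append(step(u_obs[-1], true_error))

# Inverse step: average residual of the unforced forecast over the observation window.
residuals = [(u_obs[n + 1] - step(u_obs[n])) / dt for n in range(200)]
est_error = np.mean(residuals, axis=0)
print("max abs estimation error:", np.abs(est_error - true_error).max())
```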

  5. What is legally defensible and scientifically most accurate approach for estimating the intake of radionuclides in workers?

    The merits and demerits of each technique utilized for determining the intake of radioactive materials in workers are described, with particular emphasis on the intake of thorium, uranium, and plutonium. Air monitoring at workplaces has certain flaws, which may give erroneous estimates of the intake of radionuclides. Bioassay techniques involve radiochemical determination of radionuclides in biological samples such as urine and feces, and employ biokinetic models to estimate the intake from such measurements. Though highly sensitive and accurate procedures are available for the determination of these radionuclides, the biokinetic models employed produce large errors in the estimate. In vivo measurements have the fundamental problem of poor sensitivity. Also, owing to the non-availability of such facilities at most nuclear sites, transporting workers to other facilities may consume considerable financial resources. It seems difficult to defend in a court of law that determination of the intake of radioactive material in workers by any individual procedure is accurate; at best these techniques may be employed to obtain only an estimate of intake. (author)

  6. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performances of the proposed approach are analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
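    A minimal sketch of the MAP-with-conjugate-gradient step, not the thesis' full-spectral polychromatic forward model: a random linear operator stands in for the projection model, and a non-quadratic cost with a smooth prior is minimized with SciPy's CG method. All values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: MAP estimate with a Gaussian noise term and a smooth non-quadratic prior,
# minimized by conjugate gradients. A is an illustrative stand-in forward operator.

rng = np.random.default_rng(0)
n, m = 40, 60
A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[10:20] = 1.0
y = A @ x_true + 0.05 * rng.normal(size=m)

lam, eps, var = 0.5, 1e-3, 0.05**2

def cost(x):
    r = y - A @ x
    return 0.5 * r @ r / var + lam * np.sum(np.sqrt(x**2 + eps))

def grad(x):
    r = y - A @ x
    return -A.T @ r / var + lam * x / np.sqrt(x**2 + eps)

res = minimize(cost, np.zeros(n), jac=grad, method="CG")
print("converged:", res.success,
      " reconstruction RMSE:", float(np.sqrt(np.mean((res.x - x_true) ** 2))))
```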

  7. Computer-Aided Design Data Extraction Approach to Identify Product Information

    Mohamad F.A. Jabal

    2009-01-01

    Full Text Available Problem statement: Many approaches have been proposed previously, such as the AUTOFEAT algorithm, feature recognition, the Intelligent Feature Recognition Methodology (IFRM), a part recognition algorithm and a graph theory-based approach, in order to solve the integration issue between CAD and CAM. However, there is no direct connection between the CAD database and the machine database. Therefore, a comparison among the approaches was conducted, because recognizing the most suitable approach is an important task before this research can proceed to the next stage. Approach: This study focused on a CAD data extraction approach to identify product information. CAD data refers to Computer-Aided Design (CAD) data extracted from the CAD drawing, which contains the drawing of the product to be produced by manufacturing. CAD data consists of geometric and non-geometric data. Geometric data contains lines, curves and vertices, while non-geometric data includes texts, colors and layers. The extracted CAD data are needed to generate the product information, which is useful information for the machine in the production field to produce the product exactly as depicted in the CAD drawing. Basically, the product information consists of product details such as the length, thickness, width and radius of the product, and process information for the machine to process the product, such as tapering, cutting, drilling and punching. In addition, the product information also contains the type of materials for the product. Results: As a result, feature recognition is the most suitable approach that can be applied for this research. Thus, this approach was selected to proceed to the next stage. Conclusion: The conclusion from the comparison among the approaches is that, in terms of accuracy, the extracted data are not accurate when the drawing is incomplete or contains noise such as unwanted lines or any shapes crossing the object in the drawing.

  8. A Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and, then, categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo-matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
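    For reference, the sketch below implements the standard (unmodified) census transform and a Hamming-distance matching cost on a toy image pair; the paper uses a modified census transform inside a larger stereo-matching pipeline.

```python
import numpy as np

# Sketch: 3x3 census transform (each pixel described by which neighbours are darker)
# and a Hamming-distance matching cost between two census-coded images.

def census_3x3(img):
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint16)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out |= ((shifted < img).astype(np.uint16) << bit)
            bit += 1
    return out

def hamming(a, b):
    # Population count of the XOR of two census codes.
    x = (a ^ b).astype(np.uint32)
    return np.unpackbits(x.view(np.uint8), axis=None).reshape(*x.shape, 32).sum(-1)

left = np.random.default_rng(0).integers(0, 255, (8, 8)).astype(np.uint8)
right = np.roll(left, 2, axis=1)                 # toy "disparity" of 2 pixels
cost = hamming(census_3x3(left), census_3x3(np.roll(right, -2, axis=1)))
print("matching cost after compensating the shift:", int(cost.sum()))
```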

  9. Computational Approaches for Microalgal Biofuel Optimization: A Review

    Joseph Koussa

    2014-01-01

    Full Text Available The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  10. A Novel Approach of Load Balancing in Cloud Computing using Computational Intelligence

    Shabnam Sharma

    2016-02-01

    Full Text Available Nature-inspired meta-heuristic algorithms have proved to be beneficial for solving real-world combinatorial problems such as the minimum spanning tree, the knapsack problem, process planning problems, load balancing and many more. In this research work, existing meta-heuristic approaches are discussed. Due to the astonishing feature of echolocation, the bat algorithm has drawn major attention in recent years and is applicable to different applications such as vehicle routing optimization, time-tabling in railway optimization problems, load balancing in cloud computing, etc. Later, the biological behaviour of bats is explored and various areas of further research are discussed. Finally, the main objective of the research paper is to propose an algorithm for one of the most important applications, which is load balancing in a cloud computing environment.
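    A simplified sketch of the standard bat algorithm (constant loudness and pulse rate) applied to a toy load-balancing fitness, where positions encode workload fractions per VM and the fitness is the makespan; the VM speeds are assumed values and this is not the specific algorithm proposed in the paper.

```python
import numpy as np

# Sketch: bat algorithm (simplified) minimizing the makespan of a workload split
# across VMs with hypothetical speeds. Positions are softmax-normalized shares.

rng = np.random.default_rng(0)
speeds = np.array([2.0, 1.0, 4.0, 3.0])        # hypothetical VM processing speeds
total_work = 100.0

def fitness(x):
    p = np.exp(x) / np.exp(x).sum()            # softmax -> workload fractions
    return np.max(total_work * p / speeds)     # makespan to be minimized

n_bats, dim, f_min, f_max = 20, speeds.size, 0.0, 2.0
pos = rng.normal(size=(n_bats, dim))
vel = np.zeros_like(pos)
fit = np.array([fitness(x) for x in pos])
best = pos[fit.argmin()].copy()

for _ in range(200):
    beta = rng.random(n_bats)[:, None]
    freq = f_min + (f_max - f_min) * beta                 # frequency tuning
    vel += (pos - best) * freq
    cand = pos + vel
    walk = best + 0.01 * rng.normal(size=(n_bats, dim))   # local walk around best
    use_walk = rng.random(n_bats) > 0.5                   # fixed pulse rate of 0.5
    cand[use_walk] = walk[use_walk]
    cand_fit = np.array([fitness(x) for x in cand])
    improved = cand_fit < fit                             # fixed loudness: accept improvements
    pos[improved], fit[improved] = cand[improved], cand_fit[improved]
    best = pos[fit.argmin()].copy()

print("best makespan:", fit.min(), " ideal:", total_work / speeds.sum())
```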

  11. The advantage of the three dimensional computed tomographic (3 D-CT) for ensuring accurate bone incision in sagittal split ramus osteotomy

    Coen Pramono D

    2005-03-01

    Full Text Available Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including the choice of surgical technique in relation to the differences in anatomical structures among individuals. Programs that simulate the surgery have become increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, or more sophisticated methods such as three dimensional computed tomography (3 D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramina were located relatively high with respect to the sigmoid notches, and ensuring correct bone incisions in the sagittal split was therefore presumed to be difficult. A 3D-CT was made and was considered very helpful in supporting the pre-operative diagnostics.

  12. Towards a Resource Reservation Approach for an Opportunistic Computing Environment

    Advanced reservation has been used in grid environments to provide quality of service (QoS) and to guarantee that resources are available at execution time. However, in grid subtypes such as opportunistic grid computing, providing QoS and guaranteeing resource availability is a challenge. In this article, we propose a new advanced reservation approach which offers a user the possibility of selecting resources in advance for future utilization. The main goal of this proposal is therefore to offer a best-effort feature to users of an opportunistic configuration. In these types of environments it is not possible to provide QoS because there are usually no guarantees of resource availability and, consequently, of the execution of user applications. In addition, this research work provides a way to organize executions, which can improve scheduling and system operations. Experimental results, obtained through a case study, show the efficiency and relevance of our proposal.

  13. Leaching from Heterogeneous Heck Catalysts: A Computational Approach

    2002-01-01

    The possibility of carrying out a purely heterogeneous Heck reaction in practice, without Pd leaching, has been considered previously by a number of research groups, but no general consensus has yet been reached. Here, the reaction was, for the first time, evaluated by a simple computational approach. Modelling experiments were performed on one of the initial catalytic steps: phenyl halide attachment on the (111)-to-(100) and (111)-to-(111) ridges of a Pd crystal. Three surface structures of the resulting [PhPdX] were identified as possible reactive intermediates. Following potential energy minimisation calculations based on a universal force field, the relative stabilities of these surface species were then determined. Results showed the most stable species to be one in which a Pd ridge atom is removed from the Pd crystal structure, suggesting that Pd leaching induced by phenyl halides is energetically favourable.

  14. Computer Modeling of Violent Intent: A Content Analysis Approach

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  15. Systems approaches to computational modeling of the oral microbiome

    Dimiter V. Dimitrov

    2013-07-01

    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet – oral microbiome – host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high throughput computational methods tightly integrated with translational systems medicine. Those approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders.

  16. Computer Aided Interpretation Approach for Optical Tomographic Images

    Klose, Christian D; Netz, Uwe; Beuthan, Juergen; Hielscher, Andreas H

    2010-01-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) of human finger joints in optical tomographic images. The image interpretation method employs a multi-variate signal detection analysis aided by a machine learning classification algorithm, called Self-Organizing Mapping (SOM). Unlike in previous studies, this allows for combining multiple physical image parameters, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging and inspection of optical tomographic images) were used as "ground truth" benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities while...

  17. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors.

    Gayvert, Kaitlyn M; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark A; Tatonetti, Nicholas P; Rickman, David S; Elemento, Olivier

    2016-06-14

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a computational drug-repositioning approach for targeting TF activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently overexpressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of electronic medical record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy for identifying drugs that specifically modulate TF activity. PMID:27264179

  18. Computational Diagnostic: A Novel Approach to View Medical Data.

    Mane, K. K. (Ketan Kirtiraj); Börner, K. (Katy)

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight about a patient's medical condition. The paper details different interactive features of the tool which offer potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags behind in seeking immediate benefits of the new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, we notice a drive that promotes a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier attempts at an EHR replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', which provides a novel computational approach to looking at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with the data is supported to help doctors practice evidence-based decisions and make judicious

  19. A computational approach for identifying pathogenicity islands in prokaryotic genomes

    Oh Tae Kwang

    2005-07-01

    Full Text Available Abstract Background Pathogenicity islands (PAIs), distinct genomic segments of pathogens encoding virulence factors, represent a subgroup of genomic islands (GIs) that have been acquired by horizontal gene transfer events. Up to now, computational approaches for identifying PAIs have focused on the detection of genomic regions which only differ from the rest of the genome in their base composition and codon usage. These approaches often lead to the identification of genomic islands, rather than PAIs. Results We present a computational method for detecting potential PAIs in complete prokaryotic genomes by combining sequence similarities and abnormalities in genomic composition. We first collected 207 GenBank accessions containing either part or all of the reported PAI loci. In sequenced genomes, strips of PAI-homologs were defined based on the proximity of the homologs of genes in the same PAI accession. An algorithm reminiscent of a sequence-assembly procedure was then devised to merge overlapping or adjacent genomic strips into a large genomic region. Among the defined genomic regions, PAI-like regions were identified by the presence of homolog(s) of virulence genes. Also, GIs were postulated by calculating G+C content anomalies and codon usage bias. Of 148 prokaryotic genomes examined, 23 pathogenic and 6 non-pathogenic bacteria contained 77 candidate PAIs that partly or entirely overlap GIs. Conclusion Supporting the validity of our method, the list of candidate PAIs included thirty four PAIs previously identified from genome sequencing papers. Furthermore, in some instances, our method was able to detect entire PAIs for which only partial sequences are available. Our method proved to be an efficient method for demarcating the potential PAIs in our study. Also, the function(s) and origin(s) of a candidate PAI can be inferred by investigating the PAI queries comprising it. Identification and analysis of potential PAIs in prokaryotic
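    Two of the building blocks described above admit compact sketches: merging overlapping or nearby genomic strips into candidate regions, and flagging regions with anomalous G+C content. The gap and deviation thresholds below are arbitrary illustrative choices.

```python
# Sketch: merge overlapping/adjacent strips of PAI homologs into candidate regions,
# and flag regions whose G+C content deviates from the genome average.

def merge_strips(strips, max_gap=5000):
    """Merge (start, end) strips that overlap or lie within max_gap of each other."""
    merged = []
    for start, end in sorted(strips):
        if merged and start <= merged[-1][1] + max_gap:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def gc_anomalous(genome, region, threshold=0.05):
    start, end = region
    return abs(gc_content(genome[start:end]) - gc_content(genome)) > threshold

strips = [(100, 900), (850, 2000), (10000, 12000)]
print(merge_strips(strips))        # -> [(100, 2000), (10000, 12000)]
```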

  20. A computational approach for deciphering the organization of glycosaminoglycans.

    Jean L Spencer

    Full Text Available BACKGROUND: Increasing evidence has revealed important roles for complex glycans as mediators of normal and pathological processes. Glycosaminoglycans are a class of glycans that bind and regulate the function of a wide array of proteins at the cell-extracellular matrix interface. The specific sequence and chemical organization of these polymers likely define function; however, identification of the structure-function relationships of glycosaminoglycans has been met with challenges associated with the unique level of complexity and the nontemplate-driven biosynthesis of these biopolymers. METHODOLOGY/PRINCIPAL FINDINGS: To address these challenges, we have devised a computational approach to predict fine structure and patterns of domain organization of the specific glycosaminoglycan, heparan sulfate (HS. Using chemical composition data obtained after complete and partial digestion of mixtures of HS chains with specific degradative enzymes, the computational analysis produces populations of theoretical HS chains with structures that meet both biosynthesis and enzyme degradation rules. The model performs these operations through a modular format consisting of input/output sections and three routines called chainmaker, chainbreaker, and chainsorter. We applied this methodology to analyze HS preparations isolated from pulmonary fibroblasts and epithelial cells. Significant differences in the general organization of these two HS preparations were observed, with HS from epithelial cells having a greater frequency of highly sulfated domains. Epithelial HS also showed a higher density of specific HS domains that have been associated with inhibition of neutrophil elastase. Experimental analysis of elastase inhibition was consistent with the model predictions and demonstrated that HS from epithelial cells had greater inhibitory activity than HS from fibroblasts. CONCLUSIONS/SIGNIFICANCE: This model establishes the conceptual framework for a new class of

  1. A NEW APPROACH TOWARDS INTEGRATED CLOUD COMPUTING ARCHITECTURE

    Niloofar Khanghahi

    2014-03-01

    Full Text Available Today, across various businesses, administrative and senior managers are seeking new technologies and approaches that they can utilize more easily and affordably, thereby raising their competitive profit and utility. Information and Communications Technology (ICT) is no exception to this principle. The cloud computing concept and technology, with its inherent advantages, has created a new ecosystem in the world of computing and is driving the ICT industry one step forward. This technology can play an important role in an organization's durability and IT strategies. Nowadays, due to the progress and global popularity of cloud environments, many organizations are moving to the cloud, and some well-known IT solution providers such as IBM and Oracle have introduced specific architectures to be deployed in cloud environments. On the other hand, using IT frameworks can be the best way to integrate business processes and other different processes. The purpose of this paper is to provide a novel architecture for the cloud environment, based on recent best practices and frameworks and other cloud reference architectures. Meanwhile, a new service model is introduced in this proposed architecture. The architecture is finally compared with a few other architectures in the form of statistical graphs to show its benefits.

  2. Cloud computing approaches for prediction of ligand binding poses and pathways.

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S

    2015-01-01

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design. PMID:25608737

  3. A computational approach for prediction of donor splice sites with improved accuracy.

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally and better than RF, when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed by using an independent test set of 50 genes, and it was found that the prediction accuracy of ANN was higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods like NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
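    A minimal sketch of the general workflow (not the authors' PreDOSS implementation): candidate donor-site windows are one-hot encoded into numeric vectors and classified with an SVM via scikit-learn, on synthetic toy sequences.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Sketch: one-hot encode short sequence windows and classify true vs. false donor
# sites with an SVM. The sequences are synthetic toy data, not HS3D/NN269.

def one_hot(seq):
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return np.array([v for base in seq for v in table[base]], dtype=float)

rng = np.random.default_rng(0)
bases = np.array(list("ACGT"))

def random_window(true_site):
    w = rng.choice(bases, size=9)
    if true_site:
        w[3:5] = ["G", "T"]          # canonical GT dinucleotide at the exon/intron boundary
    return "".join(w)

X = np.array([one_hot(random_window(i < 200)) for i in range(400)])
y = np.array([1] * 200 + [0] * 200)

scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print("5-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```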

  4. High Performance Computation Through Slicing and Value Replacement with CCDD Approach

    Deepti Tak

    2013-02-01

    Full Text Available In the software development and maintenance stages, programmers need to frequently debug the software. Software fault localization is one of the most expensive, tedious and time-intensive activities in program debugging. A common approach to fixing software errors is to compute the suspiciousness of program elements according to failed and passed test executions. However, this technique does not give full consideration to the dependences between program elements and therefore it reduces the ability to localize faults efficiently. Developers must identify statements involved in failures and select suspicious statements that may contain faults. Our paper presents a new technique that identifies statements involved in failures (those executed by failed test cases) by narrowing the search domain using a slicing technique (control and data dependence slices) and making it more effective with the CCDD (Coupling Control and Data Dependency) approach in Value Replacement. The proposed approach is more efficient and more accurate in locating statements that directly or indirectly affect the faulty statements. This approach can also be applied to many other research areas.
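    For context, the baseline step of computing per-statement suspiciousness from passed/failed coverage can be sketched with the Ochiai formula, as below; the paper's contribution layers slicing and the CCDD coupling on top of this kind of score.

```python
import numpy as np

# Sketch: per-statement suspiciousness from test coverage (Ochiai formula), the
# generic baseline this record builds on. Coverage data below is toy data.

# coverage[t, s] = 1 if test t executes statement s; failed[t] = 1 if test t fails.
coverage = np.array([[1, 1, 0, 1],
                     [1, 0, 1, 1],
                     [0, 1, 1, 1],
                     [1, 1, 1, 0]])
failed = np.array([1, 0, 1, 0])

ef = (coverage * failed[:, None]).sum(axis=0)          # failed tests executing s
ep = (coverage * (1 - failed)[:, None]).sum(axis=0)    # passed tests executing s
total_failed = failed.sum()

ochiai = ef / np.sqrt(total_failed * (ef + ep) + 1e-12)
for s, score in sorted(enumerate(ochiai), key=lambda kv: -kv[1]):
    print(f"statement {s}: suspiciousness {score:.3f}")
```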

  5. Computer-Aided Approaches for Targeting HIVgp41

    William J. Allen

    2012-08-01

    Full Text Available Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV) delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41), which advances through four distinct conformational states: (i) native, (ii) pre-hairpin intermediate, (iii) fusion active (fusogenic), and (iv) post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that the gp41 N-terminal heptad repeat (NHR) and C‑terminal heptad repeat (CHR) domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA‑approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.

  6. Computational approaches to protein inference in shotgun proteomics.

    Li, Yong Fuga; Radivojac, Predrag

    2012-01-01

    Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
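    One of the rule-based/combinatorial ideas surveyed here, a greedy minimal-set-cover ("parsimony") heuristic that explains all identified peptides with as few proteins as possible, can be sketched on toy data:

```python
# Sketch: greedy parsimony protein inference over toy peptide-to-protein mappings.

peptide_to_proteins = {
    "PEP1": {"ProtA", "ProtB"},
    "PEP2": {"ProtA"},
    "PEP3": {"ProtB", "ProtC"},
    "PEP4": {"ProtC"},
}

def greedy_parsimony(pep2prot):
    uncovered = set(pep2prot)
    selected = []
    while uncovered:
        # Pick the protein explaining the largest number of still-unexplained peptides.
        counts = {}
        for pep in uncovered:
            for prot in pep2prot[pep]:
                counts[prot] = counts.get(prot, 0) + 1
        best = max(counts, key=counts.get)
        selected.append(best)
        uncovered = {p for p in uncovered if best not in pep2prot[p]}
    return selected

print(greedy_parsimony(peptide_to_proteins))   # e.g. ['ProtA', 'ProtC']
```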

  7. Computational approaches to protein inference in shotgun proteomics

    Li Yong

    2012-11-01

    Full Text Available Abstract Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area.

  8. A computational approach to finding novel targets for existing drugs.

    Yvonne Y Li

    2011-09-01

    Full Text Available Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. Novel interactions discovered may lead to the drug being repositioned as a therapeutic treatment for its off-target's associated disease, added insight into the drug's mechanism of action, and added insight into the drug's side effects.

  9. Efficient and accurate modeling of multi-wavelength propagation in SOAs: a generalized coupled-mode approach

    Antonelli, Cristian; Li, Wangzhe; Coldren, Larry

    2015-01-01

    We present a model for multi-wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA structures, takes into account the longitudinal dependence of carrier density caused by saturation, accommodates arbitrary functional dependencies of the material gain and carrier recombination rate on the local value of carrier density, and is computationally more efficient by orders of magnitude as compared with the standard full model based on space-time equations. We apply the coupled-mode equations model to a recently demonstrated phase-sensitive amplifier based on an integrated SOA and prove its results to be consistent with the experimental data. The accuracy of the proposed model is certified by means of a meticulous comparison with the results obtained by integrating the space-time equations.

  10. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  11. An evolutionary computation approach to examine functional brain plasticity

    Arnab Roy

    2016-04-01

    Full Text Available One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in

  12. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery. PMID:26365924
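    A minimal sketch of the registration objective, under simplifying assumptions: two downsampled point clouds are modelled as isotropic Gaussian mixtures and compared with the closed-form L2 distance between the mixtures; the full method additionally optimizes a rigid transform to minimize this distance.

```python
import numpy as np

# Sketch: closed-form L2 distance between two equal-weight isotropic Gaussian
# mixtures built from downsampled point clouds. The clouds below are synthetic.

def gmm_l2(points_a, points_b, sigma=1.0):
    def cross_term(p, q):
        # Integral of the product of the two mixtures:
        # mean_ij N(mu_i; mu_j, 2*sigma^2*I) for 3-D isotropic Gaussians.
        d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
        norm = (4 * np.pi * sigma**2) ** (-1.5)
        return norm * np.exp(-d2 / (4 * sigma**2)).mean()
    return cross_term(points_a, points_a) + cross_term(points_b, points_b) \
           - 2 * cross_term(points_a, points_b)

rng = np.random.default_rng(0)
surface = rng.normal(size=(200, 3))
shifted = surface + np.array([0.5, 0.0, 0.0])     # a simple misalignment
print("L2(cloud, itself) :", gmm_l2(surface, surface))
print("L2(cloud, shifted):", gmm_l2(surface, shifted))
```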

  13. Approaches for computing uncertainties in predictions of complex-codes

    Uncertainty analysis aims at characterizing the errors associated with experiments and predictions of computer codes, in contradistinction with sensitivity analysis, which aims at determining the rate of change (i.e., derivative) in the predictions of codes when one or more (typically uncertain) input parameters varies within its range of interest. This paper reviews the salient features of three independent approaches for estimating uncertainties associated with predictions of complex system codes. The first approach reviewed in this paper as the prototype for propagation of code input errors is the so-called 'GRS method', which includes the so-called 'CSAU method' (Code Scaling, Applicability and Uncertainty) and the majority of methods adopted by the nuclear industry. Although the entire set of the actual number of input parameters for a typical NPP (Nuclear Power Plant) input deck, ranging up to about 10^5 input parameters, could theoretically be considered as uncertainty sources by these methods, only a 'manageable' number (of the order of several tens) is actually taken into account in practice. Ranges of variations, together with suitable PDF (Probability Density Function) are then assigned for each of the uncertain input parameter actually considered in the analysis. The number of computations using the code under investigation needed for obtaining the desired confidence in the results can be determined theoretically (it is of the order of 100). Subsequently, an additional number of computations (ca. 100) with the code are performed to propagate the uncertainties inside the code, from inputs to outputs (results). The second approach reviewed in this paper is the propagation of code output errors, as representatively illustrated by the UMAE-CIAU (Uncertainty Method based upon Accuracy Extrapolation 'embedded' into the Code with capability of Internal Assessment of Uncertainty). Note that this class of methods includes only a few applications from industry
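    A minimal sketch of the input-error-propagation (GRS-style) idea: uncertain inputs are sampled from assigned PDFs, a stand-in "code" is run for each sample, and the output extremes are taken as tolerance bounds; by Wilks' formula, 93 runs is the classical minimum for two-sided 95%/95% limits (59 for one-sided).

```python
import numpy as np

# Sketch: Monte Carlo propagation of input uncertainties through a stand-in system
# code. Input PDFs and the response function are illustrative assumptions only.

rng = np.random.default_rng(0)
n_runs = 93                             # classical two-sided 95/95 sample size (Wilks)

def code(inputs):
    # Stand-in for an expensive system code output (e.g., a peak temperature).
    power, gap_conductance, decay_heat = inputs
    return 900.0 + 80.0 * power + 30.0 * decay_heat - 20.0 * gap_conductance

samples = np.column_stack([
    rng.normal(1.0, 0.05, n_runs),      # power multiplier, N(1, 0.05)
    rng.uniform(0.8, 1.2, n_runs),      # gap conductance multiplier, U(0.8, 1.2)
    rng.normal(1.0, 0.10, n_runs),      # decay-heat multiplier, N(1, 0.10)
])
outputs = np.array([code(s) for s in samples])
print("two-sided 95/95 tolerance interval:", outputs.min(), "to", outputs.max())
```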

  14. Performance Analysis of Daubechies Wavelet and Differential Pulse Code Modulation Based Multiple Neural Networks Approach for Accurate Compression of Images

    S.Sridhar

    2013-09-01

    Full Text Available Large images in general contain a huge quantity of data, demanding highly efficient hybrid image compression systems. We proposed and implemented a Daubechies wavelet transform and Differential Pulse Code Modulation (DPCM) based multiple neural network hybrid model for image encoding and decoding operations, combining the advantages of wavelets, neural networks and DPCM: wavelet transforms are sets of mathematical functions that have established their viability in image compression owing to the computational simplicity of their implementation; artificial neural networks can generalize inputs even on untrained data owing to their massively parallel architectures; and Differential Pulse Code Modulation reduces redundancy based on predicted sample values. Initially the input image is subjected to a two-level decomposition using Daubechies family wavelet filters, generating the high-scale low frequency approximation coefficients A2 and the high frequency detail coefficients H2, V2, D2, H1, V1 and D1 of multiple resolutions, resembling different frequency bands. Scalar quantization and Huffman encoding schemes are used for compressing the different sub-bands based on their statistical properties, i.e., the low frequency band approximation coefficients are compressed by DPCM while the high frequency band coefficients are compressed with neural networks. Empirical analysis and objective fidelity metrics are computed and tabulated for analysis.
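    A minimal sketch of the encoding pipeline using the PyWavelets package, without the neural-network and Huffman stages: a two-level Daubechies decomposition, DPCM on the approximation band, and coarse scalar quantization of the detail bands; the quantization step is an arbitrary illustrative choice.

```python
import numpy as np
import pywt

# Sketch: two-level Daubechies decomposition, simple DPCM (no error feedback) on the
# approximation band, coarse scalar quantization of detail bands, and reconstruction.

image = np.outer(np.linspace(0, 255, 64), np.ones(64)) + \
        10 * np.random.default_rng(0).normal(size=(64, 64))

cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(image, "db2", level=2)

# DPCM on the approximation band: quantized first-order differences.
a = cA2.ravel()
diff = np.diff(a, prepend=0.0)
dpcm_stream = np.round(diff).astype(int)

# Coarse scalar quantization of the detail bands (step chosen arbitrarily here).
step = 8.0
detail_streams = [np.round(band / step).astype(int)
                  for band in (cH2, cV2, cD2, cH1, cV1, cD1)]

# Decoder side: undo DPCM and dequantize, then apply the inverse transform.
a_rec = np.cumsum(dpcm_stream).astype(float).reshape(cA2.shape)
details_rec = [s * step for s in detail_streams]
coeffs_rec = [a_rec, tuple(details_rec[0:3]), tuple(details_rec[3:6])]
recon = pywt.waverec2(coeffs_rec, "db2")[:64, :64]
print("reconstruction RMSE:", float(np.sqrt(np.mean((recon - image) ** 2))))
```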

  15. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-01

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500

  16. Quantum dynamics of two quantum dots coupled through localized plasmons: An intuitive and accurate quantum optics approach using quasinormal modes

    Ge, Rong-Chun; Hughes, Stephen

    2015-11-01

    We study the quantum dynamics of two quantum dots (QDs) or artificial atoms coupled through the fundamental localized plasmon of a gold nanorod resonator. We derive an intuitive and efficient time-local master equation, in which the effect of the metal nanorod is taken into consideration self-consistently using a quasinormal mode (QNM) expansion technique of the photon Green function. Our efficient QNM technique offers an alternative and more powerful approach than the standard Jaynes-Cummings model, where the radiative decay, nonradiative decay, and spectral reshaping effect of the electromagnetic environment are rigorously included in a clear and transparent way. We also show how one can use our approach to complement the approximate Jaynes-Cummings model in certain spatial regimes where it is deemed to be valid. We then present a study of the quantum dynamics and photoluminescence spectra of the two plasmon-coupled QDs. We first explore the non-Markovian regime, which is found to be important only on the ultrashort time scale of the plasmon mode, which is about 40 fs. For the field-free evolution case of excited QDs near the nanorod, we demonstrate how spatially separated QDs can be effectively coupled through the plasmon resonance and we show how frequencies away from the plasmon resonance can be more effective for coherently coupling the QDs. Despite the strong inherent dissipation of gold nanoresonators, we show that qubit entanglements as large as 0.7 can be achieved from an initially separate state, which has been limited to less than 0.5 in previous work for weakly coupled reservoirs. We also study the superradiance and subradiance decay dynamics of the QD pair. Finally, we investigate the rich quantum dynamics of QDs that are incoherently pumped, and study the polarization-dependent behavior of the emitted photoluminescence spectrum where a double-resonance structure is observed due to the strong photon exchange interactions. Our general quantum plasmonics
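    As a greatly simplified stand-in for the single-QNM picture (not the paper's non-Markovian QNM master equation), the sketch below builds a Lorentzian spectral density from an assumed plasmon frequency and linewidth and evaluates the Markovian decay rate it implies at several detunings; all numerical parameters are assumptions chosen only to be consistent with the ~40 fs mode lifetime quoted above.

```python
# Minimal stand-in for a single quasinormal mode: a Lorentzian spectral
# density J(w) and the Fermi-golden-rule decay rate it implies for a dot
# detuned from the plasmon resonance.  All parameters are assumed values.
import numpy as np

hbar_meV_fs = 658.2                       # hbar in meV*fs
w_qnm = 1800.0                            # plasmon (QNM) frequency, meV (assumed)
gamma = hbar_meV_fs / 40.0                # ~16.5 meV linewidth for a ~40 fs lifetime
g = 5.0                                   # dot-plasmon coupling strength, meV (assumed)

def J(w):
    """Lorentzian spectral density of the single quasinormal mode."""
    return g**2 * (gamma / 2.0) / ((w - w_qnm) ** 2 + (gamma / 2.0) ** 2) / np.pi

for detuning in (0.0, 50.0, 200.0):       # meV away from the plasmon resonance
    rate = 2.0 * np.pi * J(w_qnm + detuning)   # Markovian decay rate (meV)
    print(f"detuning {detuning:6.1f} meV -> plasmon-enhanced decay rate {rate:7.3f} meV")
```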

  17. A task-specific approach to computational imaging system design

    Ashok, Amit

    The traditional approach to imaging system design places the sole burden of image formation on optical components. In contrast, a computational imaging system relies on a combination of optics and post-processing to produce the final image and/or output measurement. Therefore, the joint-optimization (JO) of the optical and the post-processing degrees of freedom plays a critical role in the design of computational imaging systems. The JO framework also allows us to incorporate task-specific performance measures to optimize an imaging system for a specific task. In this dissertation, we consider the design of computational imaging systems within a JO framework for two separate tasks: object reconstruction and iris-recognition. The goal of these design studies is to optimize the imaging system to overcome the performance degradations introduced by under-sampled image measurements. Within the JO framework, we engineer the optical point spread function (PSF) of the imager, representing the optical degrees of freedom, in conjunction with the post-processing algorithm parameters to maximize the task performance. For the object reconstruction task, the optimized imaging system achieves a 50% improvement in resolution and nearly 20% lower reconstruction root-mean-square-error (RMSE) as compared to the un-optimized imaging system. For the iris-recognition task, the optimized imaging system achieves a 33% improvement in false rejection ratio (FRR) for a fixed false alarm ratio (FAR) relative to the conventional imaging system. The effect of performance measures like resolution, RMSE, FRR, and FAR on the optimal design highlights the crucial role of task-specific design metrics in the JO framework. We introduce a fundamental measure of task-specific performance known as task-specific information (TSI), an information-theoretic measure that quantifies the information content of an image measurement relevant to a specific task. A variety of source-models are derived to illustrate

  18. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
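    A minimal sketch of the underlying idea, Gibbs energy minimization subject to elemental mass balance, is given below for a three-species ideal gas phase using SciPy; the species set, standard chemical potentials and element totals are illustrative assumptions, not Reaktoro or GEMS3K data.

```python
# Hedged sketch of a Gibbs energy minimization for speciation: minimize
# G(n) = sum_i n_i*(mu0_i + R*T*ln(n_i/n_tot)) subject to elemental balance.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1000.0                                # J/(mol K), K
species = ["CO2", "CO", "O2"]
mu0 = np.array([-396e3, -200e3, 0.0])               # assumed standard potentials, J/mol
# elemental composition matrix (rows: C, O; columns: species)
A = np.array([[1, 1, 0],
              [2, 1, 2]])
b = np.array([1.0, 2.5])                            # total moles of C and O (assumed)

def gibbs(n):
    n = np.maximum(n, 1e-12)                        # guard the logarithm
    return np.sum(n * (mu0 + R * T * np.log(n / n.sum())))

constraints = {"type": "eq", "fun": lambda n: A @ n - b}
result = minimize(gibbs, x0=np.array([0.5, 0.5, 1.0]),
                  bounds=[(1e-12, None)] * 3,
                  constraints=constraints, method="SLSQP")
print(dict(zip(species, result.x.round(4))))        # equilibrium mole amounts
```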

  19. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection.

    Ding, Hong; Dwaraknath, Shyam S; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework and database, and is hence publicly available. PMID:27145398
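    The geometric part of such a screen can be sketched as follows: candidate substrates are ranked by in-plane misfit strain and a coherent-film strain energy density estimate. The lattice constants, substrate list and effective modulus below are illustrative placeholders rather than Materials Project values.

```python
# Hedged sketch of a substrate screen: rank candidate substrate orientations
# by in-plane misfit strain and an elastic strain-energy-density estimate.
import numpy as np

film = {"a": 4.55, "b": 5.70}                 # assumed in-plane lattice constants (Angstrom)
substrates = {                                 # assumed substrate in-plane constants
    "substrate A (001)": {"a": 3.78, "b": 3.78},
    "substrate B (001)": {"a": 4.59, "b": 4.59},
    "substrate C (0001)": {"a": 4.76, "b": 4.76},
}
E_eff = 200e9                                  # effective film modulus, Pa (assumed)

def misfit_and_energy(film, sub):
    ex = (sub["a"] - film["a"]) / film["a"]    # biaxial misfit strains of a coherent film
    ey = (sub["b"] - film["b"]) / film["b"]
    u = 0.5 * E_eff * (ex**2 + ey**2)          # strain energy density, J/m^3
    return ex, ey, u

for name, sub in sorted(substrates.items(),
                        key=lambda kv: misfit_and_energy(film, kv[1])[2]):
    ex, ey, u = misfit_and_energy(film, sub)
    print(f"{name:20s} strain ({ex:+.3f}, {ey:+.3f})  energy density {u:.2e} J/m^3")
```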

  1. Applying a cloud computing approach to storage architectures for spacecraft

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  2. A systematic approach to the accurate quantification of selenium in serum selenoalbumin by HPLC-ICP-MS

    Jitaru, Petru, E-mail: Petru.Jitaru@lne.fr [Laboratoire National de Metrologie et d' Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France); Goenaga-Infante, Heidi [LGC Limited, Queens Road, Teddington, TW11 OLY, Middlesex (United Kingdom); Vaslin-Reimann, Sophie; Fisicaro, Paola [Laboratoire National de Metrologie et d' Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France)

    2010-01-11

    In this paper, two different methods are for the first time systematically compared for the determination of selenium in human serum selenoalbumin (SeAlb). Firstly, SeAlb was enzymatically hydrolyzed and the resulting selenomethionine (SeMet) was quantified using species-specific isotope dilution (SSID) with reversed phase-HPLC (RP-HPLC) hyphenated to (collision/reaction cell) inductively coupled plasma-quadrupole mass spectrometry (CRC ICP-QMS). In order to assess the enzymatic hydrolysis yield, SeAlb was determined as an intact protein by affinity-HPLC (AF-HPLC) coupled to CRC ICP-QMS. Using this approach, glutathione peroxidase (GPx) and selenoprotein P (SelP) (the two selenoproteins present in serum) were also determined within the same chromatographic run. The levels of selenium associated with SeAlb in three serum materials, namely BCR-637, Seronorm level 1 and Seronorm level 2, obtained using both methods were in a good agreement. Verification of the absence of free SeMet, which interferes with the SeAlb determination (down to the amino acid level), in such materials was addressed by analyzing the fraction of GPx, partially purified by AF-HPLC, using RP-HPLC (GPx only) and size exclusion-HPLC (SE-HPLC) coupled to CRC ICP-QMS. The latter methodology was also used for the investigation of the presence of selenium species other than the selenoproteins in the (AF-HPLC) SelP and SeAlb fractions; the same selenium peaks were detected in both control and BCR-637 serum with a difference in age of ca. 12 years. It is also for the first time that the concentrations of selenium associated with SeAlb, GPx and SelP species in such commercially available serums (only certified or having indicative levels of total selenium content) are reported. Such indicative values can be used for reference purposes in future validation of speciation methods for selenium in human serum and/or inter-laboratory comparisons.

  3. A systematic approach to the accurate quantification of selenium in serum selenoalbumin by HPLC-ICP-MS

    In this paper, two different methods are for the first time systematically compared for the determination of selenium in human serum selenoalbumin (SeAlb). Firstly, SeAlb was enzymatically hydrolyzed and the resulting selenomethionine (SeMet) was quantified using species-specific isotope dilution (SSID) with reversed phase-HPLC (RP-HPLC) hyphenated to (collision/reaction cell) inductively coupled plasma-quadrupole mass spectrometry (CRC ICP-QMS). In order to assess the enzymatic hydrolysis yield, SeAlb was determined as an intact protein by affinity-HPLC (AF-HPLC) coupled to CRC ICP-QMS. Using this approach, glutathione peroxidase (GPx) and selenoprotein P (SelP) (the two selenoproteins present in serum) were also determined within the same chromatographic run. The levels of selenium associated with SeAlb in three serum materials, namely BCR-637, Seronorm level 1 and Seronorm level 2, obtained using both methods were in a good agreement. Verification of the absence of free SeMet, which interferes with the SeAlb determination (down to the amino acid level), in such materials was addressed by analyzing the fraction of GPx, partially purified by AF-HPLC, using RP-HPLC (GPx only) and size exclusion-HPLC (SE-HPLC) coupled to CRC ICP-QMS. The latter methodology was also used for the investigation of the presence of selenium species other than the selenoproteins in the (AF-HPLC) SelP and SeAlb fractions; the same selenium peaks were detected in both control and BCR-637 serum with a difference in age of ca. 12 years. It is also for the first time that the concentrations of selenium associated with SeAlb, GPx and SelP species in such commercially available serums (only certified or having indicative levels of total selenium content) are reported. Such indicative values can be used for reference purposes in future validation of speciation methods for selenium in human serum and/or inter-laboratory comparisons.

  4. A Near-Term Quantum Computing Approach for Hard Computational Problems in Space Exploration

    Smelyanskiy, Vadim N; Knysh, Sergey I; Williams, Colin P; Johnson, Mark W; Thom, Murray C; Macready, William G; Pudenz, Kristen L

    2012-01-01

    In this article, we show how to map a sampling of the hardest artificial intelligence problems in space exploration onto equivalent Ising models that can then be attacked using quantum annealing implemented in the D-Wave machine. We overview the existing results as well as propose new Ising model implementations for quantum annealing. We review supervised and unsupervised learning algorithms for classification and clustering with applications to feature identification and anomaly detection. We introduce algorithms for data fusion and image matching for remote sensing applications. We overview planning problems for space exploration mission applications and algorithms for diagnostics and recovery with applications to deep space missions. We describe combinatorial optimization algorithms for task assignment in the context of autonomous unmanned exploration. Finally, we discuss ways to circumvent the limitation of the Ising mapping using a "blackbox" approach based on ideas from probabilistic computing. In this ...
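    A toy version of the Ising/QUBO mapping for the task-assignment example is sketched below: assignment costs go on the diagonal, a quadratic penalty enforces one agent per task, and the ground state is found by brute force (where an annealer would instead sample low-energy states). The cost matrix and penalty weight are assumptions.

```python
# Hedged toy example: encode a 4-task, 2-agent assignment as a QUBO and find
# its ground state exhaustively.
import itertools
import numpy as np

costs = np.array([[1.0, 3.0], [2.0, 1.5], [4.0, 1.0], [2.5, 2.5]])  # task x agent (assumed)
n_tasks, n_agents = costs.shape
penalty = 10.0                          # weight enforcing "exactly one agent per task"

n = n_tasks * n_agents
Q = np.zeros((n, n))
idx = lambda t, a: t * n_agents + a
for t in range(n_tasks):
    for a in range(n_agents):
        # linear terms: cost plus the -penalty part of penalty*(sum_a x - 1)^2
        Q[idx(t, a), idx(t, a)] += costs[t, a] - penalty
        for a2 in range(a + 1, n_agents):
            Q[idx(t, a), idx(t, a2)] += 2.0 * penalty   # pairwise penalty terms

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
for t in range(n_tasks):
    for a in range(n_agents):
        if best[idx(t, a)]:
            print(f"task {t} -> agent {a}")
```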

  5. A MOBILE COMPUTING TECHNOLOGY FORESIGHT STUDY WITH SCENARIO PLANNING APPROACH

    Wei-Hsiu Weng

    2015-09-01

    Although the importance of mobile computing is gradually being recognized, mobile computing technology development and adoption have not been clearly realized. This paper focuses on the technology planning strategy for organizations that have an interest in developing or adopting mobile computing technology. By using scenario analysis, a technology planning strategy is constructed. In this study, thirty mobile computing technologies are classified into six groups, and the importance and risk factors of these technologies are then evaluated under two possible scenarios. The main research findings include the discovery that most mobile computing software technologies are rated high to medium in importance and low risk in both scenarios, and that scenario changes will have less impact on mobile computing devices and on mobile computing software technologies. These results provide a reference for organizations interested in developing or adopting mobile computing technology.

  6. A MOBILE COMPUTING TECHNOLOGY FORESIGHT STUDY WITH SCENARIO PLANNING APPROACH

    Wei-Hsiu Weng; Woo-Tsong Lin

    2015-01-01

    Although the importance of mobile computing is gradually being recognized, mobile computing technology development and adoption have not been clearly realized. This paper focuses on the technology planning strategy for organizations that have an interest in developing or adopting mobile computing technology. By using scenario analysis, a technology planning strategy is constructed. In this study, thirty mobile computing technologies are classified into six groups, and the importance and risk ...

  7. Computational approaches to stochastic systems in physics and biology

    Jeraldo Maldonado, Patricio Rodrigo

    In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that also incorporates the excitations of the fluid and couples its dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the

  8. The Metacognitive Approach to Computer Education: Making Explicit the Learning Journey

    Phelps, Renata

    2007-01-01

    This paper presents a theoretical and practical exploration of a metacognitive approach to computer education, developed through a three-year action research project. It is argued that the approach contrasts significantly with often-employed directive and competency-based approaches to computer education and is more appropriate in addressing the…

  9. A new approach in development of data flow control and investigation system for computer networks

    This paper describes a new approach to the development of a data flow control and investigation system for computer networks. This approach was developed and applied at the Moscow Radiotechnical Institute for the control and investigation of the Institute's computer network. It allowed us to solve our current network problems successfully. A description of our approach is presented below, along with the most interesting results of our work. (author)

  10. Human Computation An Integrated Approach to Learning from the Crowd

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  11. Differentiating Information Skills and Computer Skills: A Factor Analytic Approach

    Pask, Judith M.; Saunders, E. Stewart

    2004-01-01

    A basic tenet of information literacy programs is that the skills needed to use computers and the skills needed to find and evaluate information are two separate sets of skills. Outside the library this is not always the view. The claim is sometimes made that information skills are acquired by learning computer skills. All that is needed is a computer lab and someone to teach computer skills. This study uses data from a survey of computer and information skills to determine whether or not...

  12. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to show that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological medium.

  13. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may lead to a reduction in dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions with two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality using the CS framework, even with few-view projection data, while still preserving sharp edges in the images.

  14. Invariant Image Watermarking Using Accurate Zernike Moments

    Ismail A. Ismail

    2010-01-01

    Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometric and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
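    A hedged sketch of the polar-coordinate computation is given below: Zernike moment magnitudes are evaluated by direct quadrature of the radial polynomial on a polar grid over the unit disk. The toy image and the chosen orders are illustrative; a production watermarking scheme would add embedding and extraction steps on top of this.

```python
# Sketch: Zernike moment magnitudes computed directly on a polar grid,
# avoiding the Cartesian-mapping errors mentioned above.
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    m = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k) /
             (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    return R

def zernike_magnitude(image_polar, rho, theta, n, m):
    """|Z_nm| from samples f(rho, theta) on a uniform polar grid over the unit disk."""
    integrand = image_polar * radial_poly(n, m, rho) * np.exp(-1j * m * theta) * rho
    d_rho = rho[1, 0] - rho[0, 0]
    d_theta = theta[0, 1] - theta[0, 0]
    Z = (n + 1) / np.pi * np.sum(integrand) * d_rho * d_theta
    return abs(Z)

rho, theta = np.meshgrid(np.linspace(1e-3, 1.0, 128),
                         np.linspace(0.0, 2 * np.pi, 256), indexing="ij")
image_polar = np.cos(3 * theta) * rho ** 3        # toy image defined on the unit disk
for n, m in [(2, 0), (3, 3), (4, 2)]:
    print(f"|Z_{n}{m}| = {zernike_magnitude(image_polar, rho, theta, n, m):.4f}")
```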

  15. A computational intelligence approach to the Mars Precision Landing problem

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previous landed mission, or reaching a particular geographic landmark. The current state of the art footprint is in the magnitude of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by gravity turn to hover and then thrust vector to desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
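    A minimal particle swarm optimizer of the kind referred to above is sketched below on a toy two-dimensional landing-error surface; the surrogate objective, swarm size and PSO coefficients are assumptions and do not represent the actual entry dynamics or the NASA test problems.

```python
# Hedged sketch of a basic particle swarm optimizer on a toy objective.
import numpy as np

def landing_error(x):
    """Toy surrogate: distance from a target point plus a wind-like ripple."""
    return np.linalg.norm(x - np.array([3.0, -1.5])) + 0.3 * np.sin(4.0 * x[0])

rng = np.random.default_rng(1)
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-10, 10, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([landing_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([landing_error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", gbest.round(3), "error:", landing_error(gbest).round(4))
```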

  16. A technique for quantifying wrist motion using four-dimensional computed tomography: approach and validation.

    Zhao, Kristin; Breighner, Ryan; Holmes, David; Leng, Shuai; McCollough, Cynthia; An, Kai-Nan

    2015-07-01

    Accurate quantification of subtle wrist motion changes resulting from ligament injuries is crucial for diagnosis and prescription of the most effective interventions for preventing progression to osteoarthritis. Current imaging techniques are unable to detect injuries reliably and are static in nature, thereby capturing bone position information rather than motion which is indicative of ligament injury. A recently developed technique, 4D (three dimensions + time) computed tomography (CT) enables three-dimensional volume sequences to be obtained during wrist motion. The next step in successful clinical implementation of the tool is quantification and validation of imaging biomarkers obtained from the four-dimensional computed tomography (4DCT) image sequences. Measures of bone motion and joint proximities are obtained by: segmenting bone volumes in each frame of the dynamic sequence, registering their positions relative to a known static posture, and generating surface polygonal meshes from which minimum distance (proximity) measures can be quantified. Method accuracy was assessed during in vitro simulated wrist movement by comparing a fiducial bead-based determination of bone orientation to a bone-based approach. The reported errors for the 4DCT technique were: 0.00-0.68 deg in rotation; 0.02-0.30 mm in translation. Results are on the order of the reported accuracy of other image-based kinematic techniques. PMID:25901447
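    The proximity measurement step can be sketched with a k-d tree query over two registered surface point clouds, as below; the synthetic "bone" clouds and the 2 mm threshold are illustrative assumptions standing in for the segmented and registered 4DCT surface meshes.

```python
# Hedged sketch of the proximity step: minimum distances between two bone
# surfaces, reduced here to point clouds and a k-d tree nearest-neighbour query.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# stand-ins for registered surface vertices of two carpal bones (mm)
bone_a = rng.normal(loc=[0, 0, 0], scale=2.0, size=(5000, 3))
bone_b = rng.normal(loc=[6, 0, 0], scale=2.0, size=(5000, 3))

tree_b = cKDTree(bone_b)
dists, _ = tree_b.query(bone_a)   # nearest neighbour in bone_b for each vertex of bone_a

print(f"minimum inter-surface distance: {dists.min():.2f} mm")
print(f"vertices of bone_a closer than 2 mm to bone_b: {(dists < 2.0).sum()}")
```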

  17. A Soft Computing Approach to Kidney Diseases Evaluation.

    Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique

    2015-10-01

    Renal failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is detected, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination may denote good practice, some key information has to be obtained from evaluation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes abnormal kidney function and/or structure, and there is evidence that treatment may avoid or delay its progression, by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to accomplish with traditional methodologies and existing problem-solving tools. Hence, in this work, we focus on the development of a hybrid decision support system whose knowledge representation and reasoning procedures are based on Logic Programming, which allows one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks in order to weigh the Degree-of-Confidence that one has in such a finding. The present study involved 558 patients with an average age of 51.7 years, and chronic kidney disease was observed in 175 cases. The dataset comprises twenty-four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the

  18. Accurate approach for determining fresh-water carbonate (H2CO3(*)) alkalinity, using a single H3PO4 titration point.

    Birnhack, Liat; Sabach, Sara; Lahav, Ori

    2012-10-15

    A new, simple and accurate method is introduced for determining H(2)CO(3)(*) alkalinity in fresh waters dominated by the carbonate weak-acid system. The method relies on a single H(3)PO(4) dosage and two pH readings (acidic pH value target: pH~4.0). The computation algorithm is based on the concept that the overall alkalinity mass of a solution does not change upon the addition of a non-proton-accepting species. The accuracy of the new method was assessed batch-wise with both synthetic and actual tap waters and the results were compared to those obtained from two widely used alkalinity analysis methods (titration to pH~4.5 and the Gran titration method). The experimental results, which were deliberately obtained with simple laboratory equipment (glass buret, general-purpose pH electrode, magnetic stirrer) proved the method to be as accurate as the conventional methods at a wide range of alkalinity values (20-400 mg L(-1) as CaCO(3)). Analysis of the relative error attained in the proposed method as a function of the target (acidic) pH showed that at the range 4.0

  19. Computation on Information, Meaning and Representations. An Evolutionary Approach

    Menant, Christophe

    2011-01-01

    Understanding computation as “a process of the dynamic change of information” brings to look at the different types of computation and information. Computation of information does not exist alone by itself but is to be considered as part of a system that uses it for some given purpose. Information can be meaningless like a thunderstorm noise, it can be meaningful like an alert signal, or like the representation of a desired food. A thunderstorm noise participates to the generation of meani...

  20. Discovery of novel hydrogen storage materials: an atomic scale computational approach.

    Wolverton, C; Siegel, Donald J; Akbarzadeh, A R; Ozoliņš, V

    2008-02-13

    Practical hydrogen storage for mobile applications requires materials that exhibit high hydrogen densities, low decomposition temperatures, and fast kinetics for absorption and desorption. Unfortunately, no reversible materials are currently known that possess all of these attributes. Here we present an overview of our recent efforts aimed at developing a first-principles computational approach to the discovery of novel hydrogen storage materials. Such an approach requires several key capabilities to be effective: (i) accurate prediction of decomposition thermodynamics, (ii) prediction of crystal structures for unknown hydrides, and (iii) prediction of preferred decomposition pathways. We present examples that illustrate each of these three capabilities: (i) prediction of hydriding enthalpies and free energies across a wide range of hydride materials, (ii) prediction of low-energy crystal structures for complex hydrides (such as Ca(AlH4)2, CaAlH5, and Li2NH), and (iii) predicted decomposition pathways for Li4BN3H10 and destabilized systems based on combinations of LiBH4, Ca(BH4)2 and metal hydrides. For the destabilized systems, we propose a set of thermodynamic guidelines to help identify thermodynamically viable reactions. These capabilities have led to the prediction of several novel high-density hydrogen storage materials and reactions. PMID:21693890
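    Capability (i) can be illustrated with a simple van't Hoff estimate: given an assumed dehydrogenation enthalpy and entropy per mole of H2, the temperature at which the equilibrium pressure reaches 1 bar follows directly. The numbers below are placeholders, not the paper's first-principles results.

```python
# Hedged sketch: van't Hoff estimate of the 1-bar desorption temperature and
# the equilibrium H2 pressure versus temperature, from assumed dH and dS.
import numpy as np

R = 8.314                     # J/(mol H2 K)
dH = 40e3                     # assumed dehydrogenation enthalpy, J per mol H2
dS = 130.0                    # assumed entropy change, J/(mol H2 K), mostly from H2 gas

T_1bar = dH / dS              # temperature where the equilibrium pressure reaches 1 bar
T = np.linspace(250, 500, 6)
p_eq = np.exp(-dH / (R * T) + dS / R)     # equilibrium H2 pressure in bar

print(f"T(1 bar) = {T_1bar:.0f} K")
for Ti, pi in zip(T, p_eq):
    print(f"T = {Ti:5.0f} K  ->  p_eq = {pi:7.3f} bar")
```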

  1. Computational approaches and metrics required for formulating biologically realistic nanomaterial pharmacokinetic models

    The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data and a lack of computational approaches to integrating nanomaterial properties to biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans since allometric principles may not be applicable. The dynamics of corona formation, which modulates biological interactions including cellular uptake and biodistribution, is thereby a crucial process involved in the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces that are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished. (paper)
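    To illustrate how corona formation might enter such a model, the sketch below integrates a minimal two-compartment ODE system in which a corona-coated state has enhanced cellular uptake; all rate constants are illustrative assumptions rather than fitted in vivo values.

```python
# Hedged sketch of a minimal nanoparticle PK model with a corona-formation
# state that modulates tissue uptake.  Units: fraction of dose, rates in 1/h.
import numpy as np
from scipy.integrate import solve_ivp

k_corona  = 0.5    # bare particle -> corona-coated particle in blood (assumed)
k_up_bare = 0.05   # uptake of bare particles into tissue (assumed)
k_up_cor  = 0.30   # uptake of corona-coated particles, enhanced (assumed)
k_elim    = 0.02   # elimination from blood (assumed)

def rhs(t, y):
    bare, coated, tissue = y
    return [-(k_corona + k_up_bare + k_elim) * bare,
            k_corona * bare - (k_up_cor + k_elim) * coated,
            k_up_bare * bare + k_up_cor * coated]

sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 48, 7)
bare, coated, tissue = sol.sol(t)
for ti, b, c, q in zip(t, bare, coated, tissue):
    print(f"t = {ti:4.0f} h  blood(bare) {b:.3f}  blood(corona) {c:.3f}  tissue {q:.3f}")
```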

  2. Computational approaches and metrics required for formulating biologically realistic nanomaterial pharmacokinetic models

    Riviere, Jim E.; Scoglio, Caterina; Sahneh, Faryad D.; Monteiro-Riviere, Nancy A.

    2013-01-01

    The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data and a lack of computational approaches to integrating nanomaterial properties to biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans since allometric principles may not be applicable. The dynamics of corona formation, which modulates biological interactions including cellular uptake and biodistribution, is thereby a crucial process involved in the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces that are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished.

  3. The role of chemistry and pH of solid surfaces for specific adsorption of biomolecules in solution—accurate computational models and experiment

    Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1–1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed. (paper)

  4. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
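    The order-of-accuracy theme can be illustrated by comparing second- and fourth-order central differences for a smooth test function, as in the hedged sketch below; this demonstrates convergence order only and is not the paper's single-step propagation algorithms.

```python
# Sketch: error of 2nd- vs 4th-order central differences for d/dx sin(x).
import numpy as np

def d1_2nd(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x0 = 0.7
for h in (0.1, 0.05, 0.025):
    e2 = abs(d1_2nd(np.sin, x0, h) - np.cos(x0))
    e4 = abs(d1_4th(np.sin, x0, h) - np.cos(x0))
    print(f"h={h:6.3f}   2nd-order error {e2:.2e}   4th-order error {e4:.2e}")
```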

  5. Computer Science Contests for Secondary School Students: Approaches to Classification

    Wolfgang POHL

    2006-04-01

    The International Olympiad in Informatics currently provides a model which is imitated by the majority of contests for secondary school students in Informatics or Computer Science. However, the IOI model can be criticized, and alternative contest models exist. To support the discussion about contests in Computer Science, several dimensions for characterizing and classifying contests are suggested.

  6. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Vladimiras Dolgopolovas; Valentina Dagienė; Saulius Minkevičius; Leonidas Sakalauskas

    2015-01-01

    The aim of this study is to present an approach to the introduction into pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. The modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is the essential topic to be included in the curriculum. At the same ti...

  7. Development of Computer Science Disciplines - A Social Network Analysis Approach

    Pham, Manh Cuong; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science considers conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of important journals, it lacks computer science conference and workshop proceedings. That results in an imprecise and incomplete analysis of the computer science knowledge. This paper presents an analysis on the computer science knowledge network constructed from all types of publications, aiming at providing a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at journal/conference level using citation linkage, to identify the development of sub-disciplines. We investiga...

  8. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  9. What Computational Approaches Should be Taught for Physics?

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. Teaching such Introductory Scientific Computing courses requires that some choices be made as to which subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and in Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  10. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  11. An introduction to statistical computing a simulation-based approach

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met

  12. Loss tolerant one-way quantum computation -- a horticultural approach

    Varnava, M; Rudolph, T; Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2005-01-01

    We introduce a scheme for fault tolerantly dealing with losses in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively - no coherent measurements or coherent correction is required. We then use this procedure within a specific linear optical quantum computation proposal to show that: (i) given perfect sources, detector inefficiencies of up to 50% can be tolerated and (ii) given perfect detectors, the purity of the photon source (overlap of the photonic wavefunction with the desired single mode) need only be greater than 66.6% for efficient computation to be possible.

  13. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming a representation of modeling methodology in computer science lessons. The need to study computer modeling arises because current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  14. Computational Approaches for Probing the Formation of Atmospheric Molecular Clusters

    Elm, Jonas

    This thesis presents the investigation of atmospheric molecular clusters using computational methods. Previous investigations have focused on solving problems related to atmospheric nucleation, and have not been targeted at the performance of the applied methods. This thesis focuses on assessing...

  15. A computational approach to George Boole's discovery of mathematical logic

    Ledesma, Luis de; Pérez, Aurora; Borrajo, Daniel; Laita, Luis M.

    1997-01-01

    This paper reports a computational model of Boole's discovery of Logic as a part of Mathematics. George Boole (1815–1864) found that the symbols of Logic behaved as algebraic symbols, and he then rebuilt the whole contemporary theory of Logic by the use of methods such as the solution of algebraic equations. Study of the different historical factors that influenced this achievement has served as background for our two main contributions: a computational representation of Boole's Logic before ...

  16. AN ETHICAL ASSESSMENT OF COMPUTER ETHICS USING SCENARIO APPROACH

    Maslin Masrom; Zuraini Ismail; Ramlah Hussein

    2010-01-01

    Ethics refers to a set of rules that define right and wrong behavior, used for moral decision making. In this case, computer ethics is one of the major issues in information technology (IT) and information system (IS). The ethical behaviour of IT students and professionals need to be studied in an attempt to reduce many unethical practices such as software piracy, hacking, and software intellectual property violations. This paper attempts to address computer-related scenarios that can be used...

  17. Development of Computer Science Disciplines - A Social Network Analysis Approach

    Pham, Manh Cuong; Klamma, Ralf; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science considers conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of important journals, it lacks computer science conference and ...

  18. An Econometric Approach of Computing Competitiveness Index in Human Capital

    Salahodjaev, Raufhon; Nazarov, Zafar

    2013-01-01

    The aim of this paper is to provide a methodology for estimating one of the components (pillars) of the Global Competitiveness Index (GCI), the health and primary education (HPE) pillar, for countries not included in the Global Competitiveness Report, using conventional econometric techniques. Specifically, using weighted least squares and bootstrapping methods, we are able to compute the HPE for two countries of the former Soviet Union, Uzbekistan and Belarus, and then compare the computed
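    A hedged sketch of the estimation strategy, weighted least squares plus a bootstrap interval for an out-of-sample prediction, is given below on synthetic data; the indicator set, weights and the hypothetical "missing country" predictor vector are assumptions, not the GCI data.

```python
# Sketch: weighted least squares fit and a bootstrap confidence interval for
# an out-of-sample prediction, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 indicators
beta_true = np.array([3.0, 1.2, -0.7])
w = rng.uniform(0.5, 2.0, size=n)                            # observation weights (assumed)
y = X @ beta_true + rng.normal(scale=1.0 / np.sqrt(w))

def wls(X, y, w):
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

x_new = np.array([1.0, 0.4, -1.1])              # hypothetical "missing country" predictors
boot_preds = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)            # resample observations with replacement
    boot_preds.append(x_new @ wls(X[idx], y[idx], w[idx]))
lo, hi = np.percentile(boot_preds, [2.5, 97.5])
print(f"WLS point prediction {x_new @ wls(X, y, w):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```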

  19. Collaboration in computer science: a network science approach. Part I

    Franceschet, Massimo

    2010-01-01

    Co-authorship in publications within a discipline uncovers interesting properties of the analysed field. We represent collaboration in academic papers of computer science in terms of differently grained networks, including those sub-networks that emerge from conference and journal co-authorship only. We take advantage of the network science paraphernalia to take a picture of computer science collaboration including all papers published in the field since 1936. We investigate typical bibliomet...

  20. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users area. AVES was designed and built to solve the growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload on the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of online data storage. The AVES software package consists of about 50 specific programs. Thus the overall computing time, compared to that of a single-processor personal computer, has been reduced by up to a factor of 70.

  1. New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy

    The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, which allows an inherently large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits that implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates shrinks toward atomic dimensions, the performance of such a system is no longer classical but is instead governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable for classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing large numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until several methods were proposed to build an experimental QC. These methods include trapped ions, cavity QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the spins with

  2. AN ETHICAL ASSESSMENT OF COMPUTER ETHICS USING SCENARIO APPROACH

    Maslin Masrom

    2010-06-01

    Full Text Available Ethics refers to a set of rules that define right and wrong behavior, used for moral decision making. In this context, computer ethics is one of the major issues in information technology (IT) and information systems (IS). The ethical behaviour of IT students and professionals needs to be studied in an attempt to reduce unethical practices such as software piracy, hacking, and software intellectual property violations. This paper addresses computer-related scenarios that can be used to examine computer ethics. A computer-related scenario consists of a short description of an ethical situation whereby the subjects of the study, such as IT professionals or students, rate the ethics of the scenario, i.e., attempt to identify the ethical issues involved. The paper also reviews several measures of computer ethics in different settings. The perceptions of various dimensions of ethical behaviour in IT that are related to the circumstances of the ethical scenario are also presented.

  3. Computational strategies and improvements in the linear algebraic variational approach to rearrangement scattering

    Schwenke, David W.; Mladenovic, Mirjana; Zhao, Meishan; Truhlar, Donald G.; Sun, Yan

    1988-01-01

    The computational steps in calculating quantum mechanical reactive scattering amplitudes by the L2 generalized Newton variational principle are discussed with emphasis on computational strategies and recent improvements that make the calculations more efficient. Special emphasis is placed on quadrature techniques, storage management strategies, use of symmetry, and boundary conditions. It is concluded that an efficient implementation of these procedures provides a powerful algorithm for the accurate solution of the Schroedinger equation for rearrangements.

  4. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises; and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without hav...

  5. Quantum Computing Approach to Nonrelativistic and Relativistic Molecular Energy Calculations

    Veis, Libor; Pittner, Jiří

    Hoboken : John Wiley, 2014 - (Kais, S.), s. 107-135 ISBN 978-1-118-49566-7. - (Advances in Chemical Physics. Vol. 154) R&D Projects: GA ČR GA203/08/0626 Institutional support: RVO:61388955 Keywords : full configuration interaction (FCI) calculations * nonrelativistic molecular hamiltonians * quantum computing Subject RIV: CF - Physical ; Theoretical Chemistry

  6. Simulation of quantum computation : A deterministic event-based approach

    Michielsen, K; De Raedt, K; De Raedt, H

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  7. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…

  8. R for cloud computing an approach for data scientists

    Ohri, A

    2014-01-01

    R for Cloud Computing looks at some of the tasks performed by business analysts on the desktop (PC era) and helps the user navigate the wealth of information in R and its 4000 packages, as well as transition the same analytics to the cloud. With this information the reader can select both cloud vendors within the sometimes confusing cloud ecosystem and the R packages that can help process analytical tasks with minimum effort and cost, and maximum usefulness and customization. The use of graphical user interfaces (GUIs) and step-by-step screenshot tutorials is emphasized in this book to lessen the famous learning curve of R and some of the needless confusion created in cloud computing that hinders its widespread adoption. This will help you kick-start analytics on the cloud, with chapters on cloud computing, R, common tasks performed in analytics, scrutiny of big data analytics, and setting up and navigating cloud providers. Readers are exposed to a breadth of cloud computing ch...

  9. A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia

    Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub

    2011-01-01

    We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…

  10. Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.

    Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao

    2013-09-10

    Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied fractionally between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of a large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. The theory and technical details of this method, along with calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data. PMID:26592414
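
    As a schematic of the lambda-coupling idea summarized above (not the authors' mixing-Hamiltonian implementation), the deprotonation free energy can be accumulated by thermodynamic integration over the fractional proton charge and converted to pKa units via Delta G = RT ln(10) pKa; the <dH/dlambda> profile below is synthetic.

      import numpy as np

      R, T = 8.314462618e-3, 298.15                   # kJ/(mol K), K
      lam = np.linspace(0.0, 1.0, 11)                 # fractional proton charge, 0 -> 1
      dHdlam = 40.0 + 5.0 * lam                       # synthetic <dH/dlambda> profile, kJ/mol

      # Thermodynamic integration with the trapezoidal rule.
      dG = np.sum((dHdlam[1:] + dHdlam[:-1]) / 2.0 * np.diff(lam))
      pKa = dG / (R * T * np.log(10.0))               # Delta G = RT ln(10) * pKa
      print(f"Delta G = {dG:.1f} kJ/mol  ->  {pKa:.1f} pKa units")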

  11. Modeling and computational issues for air/water quality problems: a grid computing approach

    In this paper we report some results on the design of a grid computing application in the field of environmental sciences. The case study concerns the integration of several independent computational models (weather, air and water quality, and sea waves and currents) over a grid computing infrastructure. The aim of the project is the development of an efficient, high-performance and general-purpose distributed laboratory for environmental modeling. Expected end users may be either researchers in computational environmental sciences, who are provided with a standard interface to access existing and distributed resources (databases, models, visualization tools, parallel computers and the virtual organisation's collaborative community facilities), or ordinary citizens, who can access the results of operational runs of the integrated system for obtaining short-scale forecasts through a web-based interface. Presently, the application continuously supplies real-time forecasts and scenario analysis for both weather and air quality, and it is interactively accessible via a dedicated web portal

  12. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
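
    The general approximate Bayesian computation recipe that such methods build on can be illustrated with a toy rejection sampler; the simulator and summary statistic below are stand-ins, not the coalescent machinery or the AFS/LD statistics used by PopSizeABC.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(pop_size, n=200):
          """Toy stand-in for a population-genetic simulator: returns one summary statistic."""
          return rng.poisson(pop_size * 1e-3, n).mean()

      observed = simulate(5000)                     # pretend this summary came from real genomes
      prior = rng.uniform(1e3, 2e4, 20000)          # prior draws of the effective population size

      # Rejection ABC: keep parameter values whose simulated summaries fall closest to the data.
      dist = np.array([abs(simulate(N) - observed) for N in prior])
      posterior = prior[dist < np.quantile(dist, 0.01)]
      print(f"approximate posterior mean N_e ~ {posterior.mean():.0f}")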

  13. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  14. Distance Based Asynchronous Recovery Approach In Mobile Computing Environment

    Yogita Khatri,

    2012-06-01

    Full Text Available A mobile computing system is a distributed system in which at least one of the processes is mobile. Such systems are constrained by lack of stable storage, low network bandwidth, mobility, frequent disconnection and limited battery life. Checkpointing is one of the techniques commonly used to provide fault tolerance in mobile computing environments. In order to suit the mobile environment, a distance-based recovery scheme is proposed which is based on checkpointing and message logging. After the system recovers from failures, only the failed processes roll back and restart from their respective recent checkpoints, independently of the others. The salient feature of this scheme is to reduce the transfer and recovery cost. While the mobile host moves within a specific range, recovery information is not moved; it is transferred to a nearby location only if the mobile host moves out of that range.

  15. Computational Model of Music Sight Reading: A Reinforcement Learning Approach

    Yahya, Keyvan

    2010-01-01

    Although the music sight-reading process has usually been studied from cognitive or neurological viewpoints, computational learning methods such as reinforcement learning have not yet been used to model such processes. In this paper, with regard to the essential properties of our specific problem, we consider the value function concept and show that the optimum policy can be obtained by the method we offer without becoming involved in computing complex value functions, which are in most cases inexact. The algorithm we offer here is a PDE-based algorithm associated with stochastic optimization programming, and we consider that in this case it is more applicable than normative algorithms such as the temporal difference method.

  16. Computational approaches to identify functional genetic variants in cancer genomes

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris; Ritchie, Graham R.S.; Creixell, Pau; Karchin, Rachel; Vazquez, Miguel; Fink, J. Lynn; Kassahn, Karin S.; Pearson, John V.; Bader, Gary; Boutros, Paul C.; Muthuswamy, Lakshmi; Ouellette, B.F. Francis; Reimand, Jüri; Linding, Rune; Shibata, Tatsuhiro; Valencia, Alfonso; Butler, Adam; Dronov, Serge; Flicek, Paul; Shannon, Nick B.; Carter, Hannah; Ding, Li; Sander, Chris; Stuart, Josh M.; Stein, Lincoln D.; Lopez-Bigas, Nuria

    2014-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype. PMID:23900255

  17. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Pavithra Sampath; Gang Qian; Ellen Campana; Todd Ingalls; Jodi James; Stjepan Rajko; Jessica Mumford; Harvey Thornburg; Dilip Swaminathan; Bo Peng

    2009-01-01

    Laban movement analysis (LMA) is a systematic framework for describing all forms of human movement and has been widely applied across animation, biomedicine, dance, and kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world...

  18. Computational approaches to identify functional genetic variants in cancer genomes

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris;

    2013-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority of these drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.

  19. Variations of geometric invariant quotients for pairs, a computational approach

    Gallardo, Patricio; Martinez-Garcia, Jesus

    2016-01-01

    We study, from a computational viewpoint, the GIT compactifications of pairs formed by a hypersurface and a hyperplane. We provide a general setting and algorithms to calculate all polarizations which give different GIT quotients, the finite number of one-parameter subgroups required to detect the lack of stability, and all maximal orbits of non-stable pairs. Our algorithms have been fully implemented in Python for all dimensions and degrees. We applied our work to the case of cubic surface...

  20. A learning strategy approach for teaching novice computer programmers

    Begley, Donald D.

    1984-01-01

    The purpose of this thesis is to investigate various learning strategies and present some suggested applications for the teaching of computer programming to Marine Corps entry-level programmers. These learning strategies are used to develop a cognitively designed structure for the teaching of the software engineering process. This structure is designed so that programmers could have readily available in their thinking process modern software engineering goals and principles that will ultima...

  1. SOFT COMPUTING APPROACH TO PREDICT INTRACRANIAL PRESSURE VALUES

    Mario Versaci

    2014-01-01

    Full Text Available The estimation and prediction of values related to the intracranial pressure (ICP) represent an important step in evaluating the compliance of the human brain, above all in those cases in which an increase in ICP creates high-risk conditions for the patient. The standard therapy is neurosurgical but, while waiting for it, a targeted pharmacological therapy is needed, which leads to an overload of kidney function. Thus, the need becomes evident for an effective and efficient procedure for predicting ICP values over a suitable time window, so that systematic pharmacological action is directed only towards deliveries that are really necessary. The prediction techniques most commonly used in the literature, while providing a good time window, are characterized by a heavy computational complexity that makes them unattractive for real-time applications and technology transfer. In addition, ICP sampling techniques are not free from uncertainties due to disturbing factors (breathing, heartbeat, voluntary/involuntary movement), requiring the manipulation of uncertain and imprecise data. Thus, the choice of soft-computing predictive techniques appears reasonable: firstly, because they handle uncertain and/or imprecise data effectively and, secondly, because for the same prediction window they require a reduced computational load. In this study the author presents the prediction of ICP values through a two-factor fuzzy time series, comparing the results with more sophisticated techniques.

  2. Computational Approaches to Viral Evolution and Rational Vaccine Design

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has now made it possible to carry out a systematic analysis of these data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  3. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" at the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution to the elucidation of mechanisms and the rationalization of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been one of the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion in some cases is deceptive and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods to systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, the exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions

  4. Computational morphology a computational geometric approach to the analysis of form

    Toussaint, GT

    1988-01-01

    Computational Geometry is a new discipline of computer science that deals with the design and analysis of algorithms for solving geometric problems. There are many areas of study in different disciplines which, while being of a geometric nature, have as their main component the extraction of a description of the shape or form of the input data. This notion is more imprecise and subjective than pure geometry. Such fields include cluster analysis in statistics, computer vision and pattern recognition, and the measurement of form and form-change in such areas as stereology and developmental biolo

  5. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    Sathish, Kuppani; RamaMohan Reddy, A.

    2016-06-01

    In recent trends, grid computing is one of the emerging areas in computing platforms that support parallel and distributed environments. The main problem in grid computing is the scheduling of workflows according to user specifications, which is a challenging task that also impacts performance. This paper proposes a hybrid GAACO approach, which is a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm. The GAACO approach proposes different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.

  6. Computational Thinking - A New Approach for Teaching Computer Science to College Freshmen

    Wärmedal, Linnea

    2014-01-01

    Felix, qui potuit rerum cognoscere causas. The traditional way of introducing computer science to college freshmen is through a programming course. Such a course often covers programming, problem solving, efficiency, debugging, memory allocation and complexity. The student is introduced to all of this within the first course in computer science. To be introduced to all these concepts during the first course could be compared to learning fundamental arithmetic alongside the mean value theor...

  7. Computer-assisted modeling: Contributions of computational approaches to elucidating macromolecular structure and function: Final report

    The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field

  8. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
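
    The error-controlled step-size adaptation described above follows a standard controller pattern. The sketch below is generic and assumes a local error estimate is already available at each step; it is not the authors' estimator, which is specific to the diffusive wave discretization.

      def adapt_step(dt, err, tol, order=2, safety=0.9, fac_min=0.2, fac_max=5.0):
          """Classic step-size controller: accept or reject a step and rescale dt
          so that the local error estimate approaches the user-specified tolerance."""
          factor = safety * (tol / max(err, 1e-30)) ** (1.0 / (order + 1))
          factor = min(fac_max, max(fac_min, factor))
          accepted = err <= tol
          return accepted, dt * factor

      # Example: an overly optimistic step is rejected and the step size is shrunk.
      print(adapt_step(dt=10.0, err=3e-2, tol=1e-3))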

  9. [Contribution of computers to pharmacokinetics, Bayesian approach and population pharmacokinetics].

    Hecquet, B

    1995-12-01

    A major objective for pharmacokineticians is to help practitioners define drug administration protocols. Protocols are generally designed for all patients, but inter-individual variability calls for monitoring of each patient. Computers are widely used to determine pharmacokinetic parameters and to try to individualize drug administration. Several examples are briefly described: terminal half-life determination by regression; model fitting to experimental data; Bayesian statistics for individual dose adaptation; population pharmacokinetic methods for parameter evaluation. These methods do not replace the pharmacokinetician's judgment but could make possible drug administration that takes individual characteristics into account. PMID:8680074
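
    The first example mentioned in the record, terminal half-life determination by regression, amounts to a log-linear fit of the elimination phase; an illustrative calculation on made-up concentration data:

      import numpy as np

      t = np.array([4.0, 6.0, 8.0, 12.0, 24.0])          # h, terminal sampling times (synthetic)
      c = np.array([8.1, 6.0, 4.4, 2.4, 0.55])           # mg/L, plasma concentrations (synthetic)

      slope, intercept = np.polyfit(t, np.log(c), 1)     # ln C = ln C0 - k_e * t
      k_e = -slope
      t_half = np.log(2) / k_e
      print(f"k_e = {k_e:.3f} 1/h, terminal half-life = {t_half:.1f} h")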

  10. Essential algorithms a practical approach to computer algorithms

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  11. Presenting an approach for scientific workflow distribution on cloud computing data center servers to optimize the usage of computational resources

    Ahmad Faraahi

    2012-01-01

    Full Text Available In a distributed system, scheduling and mapping prioritized tasks among processors is one of the issues that has attracted the most attention. This problem consists of mapping a DAG with a set of tasks onto a number of parallel processors; its purpose is to allocate tasks to the available processors so as to satisfy the priority and dependency requirements of the tasks, and also to minimize the total execution time of the graph. A cloud computing system is one of the distributed systems that provides suitable conditions for executing these kinds of applications, but since payment in this system is based on the rate of usage, reducing computation costs should also be considered as another objective. In this article, we present an approach to reduce computation costs and to increase the utilization of computational power in a cloud computing system. Simulations show that, to a possible extent, the presented approach decreases the costs of using computational resources.

  12. Integrated Post-Experiment Monoisotopic Mass Refinement: An Integrated Approach to Accurately Assign Monoisotopic Precursor Masses to Tandem Mass Spectrometric Data

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-10-15

    Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, "integrated Post-Experiment Monoisotopic Mass Refinement" (iPE-MMR), integrates the following steps: 1) generation of refined MS/MS data by DeconMSn, 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR, and 3) elimination of systematic errors in precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, and from its multiple charge states within an MS scan, to determine precursor mass. By combining the synergistic features of each method, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. iPE-MMR also allows incorporating additional data processing step(s) or skipping step(s), if necessary, to enable new developments or applications of the tools, as each step of iPE-MMR produces output data in a common and conventional format used in proteomics data processing.

  13. One-loop kink mass shifts: a computational approach

    Alonso-Izquierdo, Alberto

    2011-01-01

    In this paper we develop a procedure to compute the one-loop quantum correction to the kink masses in generic (1+1)-dimensional one-component scalar field theoretical models. The procedure uses the generalized zeta function regularization method helped by the Gilkey-de Witt asymptotic expansion of the heat function via Mellin's transform. We find a formula for the one-loop kink mass shift that depends only on the part of the energy density with no field derivatives, evaluated by means of a symbolic software algorithm that automates the computation. The improved algorithm with respect to earlier work in this subject has been tested in the sine-Gordon and $\lambda(\phi^4)_2$ models. The quantum corrections of the sG-soliton and $\lambda(\phi^4)_2$-kink masses have been estimated with a relative error of 0.00006% and 0.00007%, respectively. Thereafter, the algorithm is applied to other models. In particular, an interesting one-parametric family of double sine-Gordon models interpolating between the ordinary sine-...

  14. Analysis of diabetic retinopathy biomarker VEGF gene by computational approaches

    Jayashree Sadasivam

    2012-08-01

    Full Text Available Diabetic retinopathy, the most common diabetic eye disease, is caused by changes in the blood vessels of the retina and remains a major cause of vision loss. It is characterized by vascular permeability and increased tissue ischemia and angiogenesis. One of the biomarkers for diabetic retinopathy, the Vascular Endothelial Growth Factor (VEGF) gene, has been identified by computational analysis. VEGF belongs to a sub-family of growth factors, the platelet-derived growth factor family of cystine-knot growth factors. These are important signalling proteins involved in both vasculogenesis and angiogenesis. Overexpression of VEGF can cause vascular disease in the retina of the eye and in other parts of the body, and drugs that inhibit VEGF can control or slow down such disease. Computational analysis of VEGF together with other genes responsible for diabetic retinopathy was carried out by aligning those genes using pairwise and multiple sequence alignments. The MSA shows VEGF's role in diabetic retinopathy and its relation to other genes and proteins responsible for the pathogenesis of the disease. The determination of the promoter and conserved domain of the VEGF gene also helps us to identify its expression levels. Molecular docking studies were therefore carried out to analyse the biomarker VEGF, which supports the treatment of diabetic retinopathy, a condition that is proliferative in nature due to uncontrolled angiogenesis.

  15. Heat sink material selection in electronic devices by computational approach

    Geffroy, P.M. [S.P.C.T.S CNRS, ENSCI, Science des Procedes Ceramiques et de Traitements de Surface, 43 a 73 avenue Albert Thomas, 87065 Limoges (France); Mathias, J.D. [G.E.M.H., ENSCI, Groupe d' Etude des Materiaux Heterogenes, 43 a 73 avenue Albert Thomas, 87065 Limoges (France); Silvain, J.F. [I.C.M.C.B. CNRS, Institut de la Chimie et de la Matiere Condensee de Bordeaux, Universite de Bordeaux 1, 87 Avenue du Docteur Schweitzer, 33608 Pessac (France)

    2008-04-15

    Due to the increasing complexity and higher density of components in modern devices, reliability and lifetime are important issues in electronic packaging. Many material solutions have been suggested and tested for the reliability optimisation of electronic devices. This study presents methodical and numerical approaches for the selection of composite materials and compares the results of different actual solutions in terms of static and fatigue criteria. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  16. What is Intrinsic Motivation? A Typology of Computational Approaches

    Pierre-Yves Oudeyer; Frederic Kaplan

    2007-01-01

    Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approac...

  17. Computational approaches to protein inference in shotgun proteomics

    Li Yong; Radivojac Predrag

    2012-01-01

    Abstract Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of protein...

  18. Scaling, growth and cyclicity in biology: a new computational approach

    Gliozzi Antonio S; Delsanto Pier; Guiot Caterina

    2008-01-01

    Abstract Background The Phenomenological Universalities approach has been developed by P.P. Delsanto and collaborators during the past 2–3 years. It represents a new tool for the analysis of experimental datasets and cross-fertilization among different fields, from physics/engineering to medicine and social sciences. In fact, it allows similarities to be detected among datasets in totally different fields and acts upon them as a magnifying glass, enabling all the available information to be e...

  19. Engineering approach to model and compute electric power markets settlements

    Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure ISO market design correctness is to analyze how well market price signals create incentives or penalties for creating an efficient market to achieve market design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on various differences in settlement design approaches to electrical power market design, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs

  20. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Micheal J.; Partridge, L. Donald [University of New Mexico, Albuquerque, NM; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
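
    The 10^12 figure can be checked directly from the numbers quoted in the abstract; a two-line calculation reproduces it to order of magnitude.

      brain = 1e16 / (20.0 * 1200.0)          # ops/s per W per cm^3 for the brain
      super_ = 1e15 / (3e6 * 1.5e9)           # 3 MW and 1500 m^3 = 1.5e9 cm^3
      print(f"brain/supercomputer efficiency ratio ~ {brain / super_:.1e}")   # roughly 2e12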

  1. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach.

    Eslami-Mossallam, Behrouz; Schram, Raoul D; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  2. Open-ended approaches to science assessment using computers

    Singley, Mark K.; Taft, Hessy L.

    1995-03-01

    We discuss the potential role of technology in evaluating learning outcomes in large-scale, widespread science assessments of the kind typically done at ETS, such as the GRE, or the College Board SAT II Subject Tests. We describe the current state-of-the-art in this area, as well as briefly outline the history of technology in large-scale science assessment and ponder possibilities for the future. We present examples from our own work in the domain of chemistry, in which we are designing problem solving interfaces and scoring programs for stoichiometric and other kinds of quantitative problem solving. We also present a new scientific reasoning item type that we are prototyping on the computer. It is our view that the technological infrastructure for large-scale constructed response science assessment is well on its way to being available, although many technical and practical hurdles remain.

  3. A Computational Approach to Politeness with Application to Social Factors

    Danescu-Niculescu-Mizil, Cristian; Jurafsky, Dan; Leskovec, Jure; Potts, Christopher

    2013-01-01

    We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our class...
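
    A minimal stand-in for a lexically driven politeness classifier of the kind described, using a plain bag-of-words logistic regression rather than the authors' theory-derived feature set and trained on invented example requests, could look like:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      requests = ["Could you possibly take a look at this?",
                  "Would you mind reviewing my edit?",
                  "Fix this now.",
                  "Why did you revert my change?"]
      labels = [1, 1, 0, 0]                    # 1 = polite, 0 = impolite (invented labels)

      clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(requests, labels)
      print(clf.predict(["Would you possibly reconsider this?"]))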

  4. Crack Propagation in Honeycomb Cellular Materials: A Computational Approach

    Marco Paggi

    2012-02-01

    Full Text Available Computational models based on the finite element method and linear or nonlinear fracture mechanics are herein proposed to study the mechanical response of functionally designed cellular components. It is demonstrated that, via a suitable tailoring of the properties of interfaces present in the meso- and micro-structures, the tensile strength can be substantially increased as compared to that of a standard polycrystalline material. Moreover, numerical examples regarding the structural response of these components when subjected to loading conditions typical of cutting operations are provided. As a general trend, the occurrence of tortuous crack paths is highly favorable: stable crack propagation can be achieved in case of critical crack growth, whereas an increased fatigue life can be obtained for a sub-critical crack propagation.

  5. Computational Approaches for Modeling the Multiphysics in Pultrusion Process

    Carlone, P.; Baran, Ismet; Hattel, Jesper Henri;

    2013-01-01

    Pultrusion is a continuous manufacturing process used to produce high-strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared for

  6. Computational approaches for efficiently modelling of small atmospheric clusters

    Elm, Jonas; Mikkelsen, Kurt Valentin

    2014-01-01

    Utilizing a comprehensive test set of 205 clusters of atmospheric relevance, we investigate how different DFT functionals (M06-2X, PW91, ωB97X-D) and basis sets (6-311++G(3df,3pd), 6-31++G(d,p), 6-31+G(d)) affect the thermal contribution to the Gibbs free energy and single point energy. Reducing the basis set used in the geometry and frequency calculation from 6-311++G(3df,3pd) → 6-31++G(d,p) implies a significant speed-up in computational time and only leads to small errors in the thermal contribution to the Gibbs free energy and subsequent coupled cluster single point energy calculation.

  7. MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing

    J.K. Verma

    2014-06-01

    Full Text Available The last few decades have witnessed steep growth in the demand for computational power. This is largely due to the shift from the industrial age to the Information and Communication Technology (ICT) age, which was in turn a result of the digital revolution. This trend in demand has led to the establishment of large-scale data centers situated at geographically distant locations. These large-scale data centers consume a large amount of electrical energy, which results in very high operating costs and a large amount of carbon dioxide (CO2) emissions due to resource underutilization. We propose the MADLVF algorithm to overcome the problems of resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.

  8. An Approach for Storage Security in Cloud Computing- A Survey

    W. Sharon Inbarani, G. Shenbaga Moorthy, C. Kumar Charlie Paul

    2013-01-01

    Full Text Available The many advantages of cloud computing are increasingly attracting individuals and organizations to outsource their data from local to remote cloud servers. In addition to cloud infrastructure and platform providers, such as Amazon, Google, and Microsoft, more and more cloud application providers are emerging which are dedicated to offering more accessible and user friendly data storage services to cloud customers. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system. We propose a threshold proxy re-encryption scheme and integrate it with decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back.

  9. A computational approach to the twin paradox in curved spacetime

    Fung, Kenneth K H; Lewis, Geraint F; Wu, Xiaofeng

    2016-01-01

    Despite being a major component in the teaching of special relativity, the twin 'paradox' is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
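
    The core of such a computation is the numerical integration of proper time along a parametrized worldline. The sketch below is a deliberately simplified illustration, comparing two constant-radius worldlines in the Schwarzschild geometry with Earth as the central mass; it is not the students' general path-following code.

      import numpy as np

      G, c, M = 6.674e-11, 2.998e8, 5.972e24          # SI units; Earth as the central mass
      r_s = 2 * G * M / c**2                          # Schwarzschild radius

      def proper_time(rate, T, n=100001):
          """Integrate dtau = rate(t) dt over coordinate time T (trapezoidal rule)."""
          t = np.linspace(0.0, T, n)
          r = rate(t)
          return float(np.sum((r[1:] + r[:-1]) / 2.0 * np.diff(t)))

      # Twin A sits at fixed radius r1; twin B follows a circular geodesic orbit at radius r2.
      r1, r2 = 6.371e6, 2.0e7
      rate_static = lambda t: np.sqrt(1.0 - r_s / r1) * np.ones_like(t)
      rate_orbit = lambda t: np.sqrt(1.0 - 1.5 * r_s / r2) * np.ones_like(t)

      T = 365.25 * 86400.0                            # one year of coordinate time
      print("proper-time difference over one year (s):",
            proper_time(rate_orbit, T) - proper_time(rate_static, T))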

  10. Soft computing approach to pattern classification and object recognition a unified concept

    Ray, Kumar S

    2012-01-01

    Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools, fuzzy relational calculus (FRC), genetic algorithm (GA) and multilayer perceptron (MLP) to provide a strong foundation for the reader. The supervised approach to pattern classification and model-based approach to occluded object recognition are treated in one framework, one based on either a conventional interpretation or a new interpretation of

  11. A computationally efficient approach for template matching-based image registration

    Vilas H Gaidhane; Yogesh V Hote; Vijander Singh

    2014-04-01

    Image registration using template matching is an important step in image processing. In this paper, a simple, robust and computationally efficient approach is presented. The proposed approach is based on the properties of a normalized covariance matrix. The main advantage of the proposed approach is that the image matching can be achieved without calculating eigenvalues and eigenvectors of a covariance matrix, hence reduces the computational complexity. The experimental results show that the proposed approach performs better in the presence of various noises and rigid geometric transformations.
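
    The record does not reproduce the algorithm itself; as a baseline illustration of template matching by a normalized similarity score (standard normalized cross-correlation, not the covariance-matrix formulation of the paper), one could write:

      import numpy as np

      def ncc(patch, template):
          """Normalized cross-correlation between an image patch and a template."""
          a = patch - patch.mean()
          b = template - template.mean()
          return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

      def match(image, template):
          """Slide the template over the image and return the best-matching offset."""
          th, tw = template.shape
          H, W = image.shape
          scores = np.array([[ncc(image[i:i + th, j:j + tw], template)
                              for j in range(W - tw + 1)]
                             for i in range(H - th + 1)])
          return np.unravel_index(scores.argmax(), scores.shape)

      rng = np.random.default_rng(2)
      img = rng.normal(size=(64, 64))
      tpl = img[20:28, 30:38].copy()
      print(match(img, tpl))                   # should recover the offset (20, 30)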

  12. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    Chacón, Enrique, E-mail: echacon@icmm.csic.es [Instituto de Ciencia de Materiales de Madrid, CSIC, 28049 Madrid, Spain and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Tarazona, Pedro, E-mail: pedro.tarazona@uam.es [Departamento de Física Teórica de la Materia Condensada, Condensed Matter Physics Center (IFIMAC), and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk [Department of Chemistry, Imperial College London, SW7 2AZ London (United Kingdom)

    2015-07-21

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus features a negligible dependence with system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke’s law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI forcefield are consistent with previous studies of these bilayers.
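
    One common way to obtain an area compressibility modulus from an area time series is the fluctuation relation K_A = k_B T <A> / var(A). The sketch below applies that relation to a synthetic area trajectory; it illustrates only the fluctuation formula, not the coupled undulatory (CU) mode analysis introduced in the paper.

      import numpy as np

      kB, T = 1.380649e-23, 300.0                       # J/K, K
      rng = np.random.default_rng(3)

      # Synthetic per-frame bilayer area trajectory (m^2), standing in for simulation output.
      A = 4.0e-17 * (1.0 + 0.02 * rng.standard_normal(50000))

      K_A = kB * T * A.mean() / A.var()                 # K_A = kB * T * <A> / var(A)
      print(f"K_A ~ {K_A * 1e3:.0f} mN/m")              # about 260 mN/m, bilayer-like magnitude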

  13. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus features a negligible dependence with system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI forcefield are consistent with previous studies of these bilayers.

  14. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus features a negligible dependence with system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke’s law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI forcefield are consistent with previous studies of these bilayers

  15. DopeLearning: A Computational Approach to Rap Lyrics Generation

    Malmi, Eric; Takala, Pyry; Toivonen, Hannu; Raiko, Tapani; Gionis, Aristides

    2015-01-01

    Writing rap lyrics requires both creativity, to construct a meaningful and an interesting story, and lyrical skills, to produce complex rhyme patterns, which are the cornerstone of a good flow. We present a method for capturing both of these aspects. Our approach is based on two machine-learning techniques: the RankSVM algorithm, and a deep neural network model with a novel structure. For the problem of distinguishing the real next line from a randomly selected one, we achieve an 82 % accurac...

  16. Data analysis of asymmetric structures advanced approaches in computational statistics

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and  considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  17. X-ray computed laminography: an approach of computed tomography for applications with limited access

    Gondrom, S.; Zhou, J.; Maisl, M.; Reiter, H.; Kroening, M.; Arnold, W. [Fraunhofer-Institut fuer Zerstoerungsfreie Pruefverfahren, Saarbruecken (Germany)

    1999-06-01

    Computed laminography (CL) is an image forming method of X-ray testing that yields images of object slices by a simple linear translation of the object relative to the tube-detector system. No relative movement of the X-ray tube to the detector is needed and in contrast to classical laminography all object layers can be obtained by one scan. Moreover CL turns out to be equivalent to a computed tomography (CT) with a limited angular region, which allows the reconstruction of object slices using slightly modified CT algorithms, resulting in a higher resolution compared with simple tomosynthesis in the case of classical laminography. (orig.) 12 refs.

  18. X-ray computed laminography: an approach of computed tomography for applications with limited access

    Computed laminography (CL) is an image forming method of X-ray testing that yields images of object slices by a simple linear translation of the object relative to the tube-detector system. No relative movement of the X-ray tube to the detector is needed and in contrast to classical laminography all object layers can be obtained by one scan. Moreover CL turns out to be equivalent to a computed tomography (CT) with a limited angular region, which allows the reconstruction of object slices using slightly modified CT algorithms, resulting in a higher resolution compared with simple tomosynthesis in the case of classical laminography. (orig.)

  19. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    Cai, C. [CEA, LIST, 91191 Gif-sur-Yvette, France and CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Rodet, T.; Mohammad-Djafari, A. [CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Legoupil, S. [CEA, LIST, 91191 Gif-sur-Yvette (France)

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  20. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    G. P. Berman

    2003-01-01

    Full Text Available The dynamics of a nuclear-spin quantum computer with a large number (L=1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of an entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to the different orders of the perturbation theory and tested using an exact numerical solution.

  1. A simplified approach to compute distribution matrices for the mapping method

    Singh, M.K.; Galaktionov, O.S.; Meijer, H.E.H.; Anderson, P.D.

    2009-01-01

    The mapping method has proven its efficiency as an analysis and optimization tool for mixing in many different flow devices. In this paper, we present a new approach to compute the coefficients of the distribution matrix, which is, both in terms of computational speed and complexity, more easy to im

  2. Granular computing and decision-making interactive and iterative approaches

    Chen, Shyi-Ming

    2015-01-01

    This volume is devoted to interactive and iterative processes of decision-making– I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems.   The volume provides the reader with an updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, brings new concepts, architectures and practice of fuzzy decision-making providing the reader with various application studies.   The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...

  3. Geometric approaches to computing 3D-landscape metrics

    M.-S. Stupariu

    2010-12-01

    Full Text Available The relationships between patterns and processes lie at the core of modern landscape ecology. These dependences can be quantified by using indices related to the patch-corridor-matrix model. This model conceptualizes landscapes as planar mosaics consisting of discrete patches. On the other hand, relief variability is a key factor for many ecological processes, and therefore these processes can be better modeled by integrating information concerning the third dimension of landscapes. This can be done by generating a triangle mesh which approximates the original terrain. The aim of this methodological paper is to introduce two new constructions of triangulations which replace a digital elevation model. These approximation methods are compared with the method which was already used in the computation of 3D-landscape metrics (firstly for parameterized surfaces and secondly for two landscape mosaics). The statistical analysis shows that all three methods are of almost equal sensitivity in reflecting the relationship between terrain ruggedness and the patch areas and perimeters. In particular, either of the methods can be used for approximating the real values of these basic metrics. However, the two methods introduced in this paper have the advantage of yielding continuous approximations of the terrain, and this fact could be useful for further developments.
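
    The basic quantity behind such 3D-landscape metrics is the surface area of a triangulated terrain. The sketch below is a generic illustration: it splits each cell of a small digital elevation model into two triangles and sums their areas from cross products. It is not one of the two new triangulation constructions introduced in the paper, and the grid, cell size and DEM values are assumed for the example.

```python
import numpy as np


def surface_area_3d(z, cell=1.0):
    """3D area of a terrain patch given a DEM grid z (heights in metres)
    on square cells of side `cell`.  Each cell is split into two triangles;
    each triangle area is half the norm of the cross product of two edges."""
    n, m = z.shape
    area = 0.0
    for i in range(n - 1):
        for j in range(m - 1):
            p00 = np.array([0.0, 0.0, z[i, j]])
            p10 = np.array([cell, 0.0, z[i, j + 1]])
            p01 = np.array([0.0, cell, z[i + 1, j]])
            p11 = np.array([cell, cell, z[i + 1, j + 1]])
            for a, b, c in ((p00, p10, p01), (p10, p11, p01)):
                area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    return area


dem = np.array([[0.0, 1.0],
                [1.0, 2.0]])          # tiny illustrative DEM (metres)
planar = (dem.shape[0] - 1) * (dem.shape[1] - 1) * 1.0 ** 2
print(surface_area_3d(dem, cell=1.0), "vs planar", planar)
```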

  4. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    Seth A. Herd

    2013-01-01

    Full Text Available We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  5. A computational toy model for shallow landslides: Molecular Dynamics approach

    Martelloni, Gianluca; Massaro, Emanuele

    2012-01-01

    The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We used a Molecular Dynamics (MD) inspired model, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. The triggering is therefore governed by the following two conditions: (a) a threshold speed of the particles and (b) a condition on the static friction, between particles and slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. Finally the interaction force between particles is defined through a potential that, in the absence of experimental data, we have mode...

  6. Partial order approach to compute shortest paths in multimodal networks

    Ensor, Andrew

    2011-01-01

    Many networked systems involve multiple modes of transport. Such systems are called multimodal, and examples include logistic networks, biomedical phenomena, manufacturing processes and telecommunication networks. Existing techniques for determining optimal paths in multimodal networks have either required heuristics or else application-specific constraints to obtain tractable problems, removing the multimodal traits of the network during analysis. In this paper weighted coloured-edge graphs are introduced to model multimodal networks, where colours represent the modes of transportation. Optimal paths are selected using a partial order that compares the weights in each colour, resulting in a Pareto optimal set of shortest paths. This approach is shown to be tractable through experimental analyses for random and real multimodal networks without the need to apply heuristics or constraints.
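
    The partial order on coloured-edge paths can be made concrete with a small sketch: each path accumulates a per-colour weight vector, and one path dominates another only if it is no worse in every colour and strictly better in at least one. The toy graph, modes and weights below are assumptions for illustration; the paper's algorithm for large networks is not reproduced.

```python
from collections import defaultdict

# toy multimodal graph: each edge carries (next node, colour/mode, weight); invented data
EDGES = {
    "s": [("a", "road", 2), ("b", "rail", 1)],
    "a": [("t", "rail", 2)],
    "b": [("t", "road", 4)],
    "t": [],
}


def colour_vector(path_edges):
    """Accumulate the per-colour weight totals of a path."""
    vec = defaultdict(float)
    for _, colour, w in path_edges:
        vec[colour] += w
    return dict(vec)


def dominates(u, v):
    """True if vector u dominates v: no greater in every colour (lower is
    better) and strictly smaller in at least one; missing colours count as 0."""
    colours = set(u) | set(v)
    no_worse = all(u.get(c, 0) <= v.get(c, 0) for c in colours)
    strictly = any(u.get(c, 0) < v.get(c, 0) for c in colours)
    return no_worse and strictly


def all_paths(node, goal, seen=()):
    """Enumerate simple paths by depth-first search (fine for a toy graph)."""
    if node == goal:
        yield []
        return
    for nxt, colour, w in EDGES[node]:
        if nxt not in seen:
            for rest in all_paths(nxt, goal, seen + (node,)):
                yield [(nxt, colour, w)] + rest


paths = list(all_paths("s", "t"))
vectors = [colour_vector(p) for p in paths]
pareto = [(p, v) for p, v in zip(paths, vectors)
          if not any(dominates(w, v) for w in vectors if w is not v)]
print(pareto)   # both routes survive: neither dominates the other in every colour
```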

  7. Driving profile modeling and recognition based on soft computing approach.

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart card as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers. PMID:19258199

  8. Analyses of Physcomitrella patens Ankyrin Repeat Proteins by Computational Approach

    Mahmood, Niaz; Tamanna, Nahid

    2016-01-01

    Ankyrin (ANK) repeat containing proteins are evolutionary conserved and have functions in crucial cellular processes like cell cycle regulation and signal transduction. In this study, through an entirely in silico approach using the first release of the moss genome annotation, we found that at least 54 ANK proteins are present in P. patens. Based on their differential domain composition, the identified ANK proteins were classified into nine subfamilies. Comparative analysis of the different subfamilies of ANK proteins revealed that P. patens contains almost all the known subgroups of ANK proteins found in the other angiosperm species except for the ones having the TPR domain. Phylogenetic analysis using full length protein sequences supported the subfamily classification where the members of the same subfamily almost always clustered together. Synonymous divergence (dS) and nonsynonymous divergence (dN) ratios showed positive selection for the ANK genes of P. patens which probably helped them to attain significant functional diversity during the course of evolution. Taken together, the data provided here can provide useful insights for future functional studies of the proteins from this superfamily as well as comparative studies of ANK proteins.

  9. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers’ needs. Due to its advantages, Cloud has been increasingly adopted in many areas, such as banking, e-commerce, retail industry, and academia. For education, cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. Diffusion of pedagogical contents on the Cloud by different E-Learning institutions leads to heterogeneous content which influences the quality of teaching offered by universities to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining an application in J2EE with the Reload editor to model heterogeneous content in the cloud. The new approach that we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners and facilitates the identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.

  10. Computational approaches for the classification of seed storage proteins.

    Radhika, V; Rao, V Sree Hari

    2015-07-01

    Seed storage proteins comprise a major part of the protein content of the seed and play an important role in seed quality. These storage proteins are important because they determine the total protein content and have an effect on the nutritional quality and functional properties for food processing. Transgenic plants are being used to develop improved lines for incorporation into plant breeding programs and the nutrient composition of seeds is a major target of molecular breeding programs. Hence, classification of these proteins is crucial for the development of superior varieties with improved nutritional quality. In this study we have applied machine learning algorithms for classification of seed storage proteins. We have presented an algorithm based on the nearest neighbor approach for classification of seed storage proteins and compared its performance with decision tree J48, multilayer perceptron neural (MLP) network and support vector machine (SVM) libSVM. The model based on our algorithm has been able to give higher classification accuracy in comparison to the other methods. PMID:26139889
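
    A generic nearest-neighbour classification of protein feature vectors can be sketched with scikit-learn as below. The random feature matrix, the assumed composition-style descriptors and the four class labels are placeholders for illustration only; the paper's bespoke nearest-neighbour algorithm and its actual sequence-derived features are not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: each protein is a numeric feature vector (e.g. an
# amino-acid composition profile).  Random data is used here, so accuracy
# will sit near chance; substitute real descriptors and class labels.
rng = np.random.default_rng(1)
X = rng.random((120, 20))                  # 120 proteins x 20 features
y = rng.integers(0, 4, size=120)           # 4 storage-protein classes

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=5)  # J48/MLP/SVM can be compared the same way
print(scores.mean())
```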

  11. An approach to computing marginal land use change carbon intensities for bioenergy in policy applications

    Accurately characterizing the emissions implications of bioenergy is increasingly important to the design of regional and global greenhouse gas mitigation policies. Market-based policies, in particular, often use information about carbon intensity to adjust relative deployment incentives for different energy sources. However, the carbon intensity of bioenergy is difficult to quantify because carbon emissions can occur when land use changes to expand production of bioenergy crops rather than simply when the fuel is consumed as for fossil fuels. Using a long-term, integrated assessment model, this paper develops an approach for computing the carbon intensity of bioenergy production that isolates the marginal impact of increasing production of a specific bioenergy crop in a specific region, taking into account economic competition among land uses. We explore several factors that affect emissions intensity and explain these results in the context of previous studies that use different approaches. Among the factors explored, our results suggest that the carbon intensity of bioenergy production from land use change (LUC) differs by a factor of two depending on the region in which the bioenergy crop is grown in the United States. Assumptions about international land use policies (such as those related to forest protection) and crop yields also significantly impact carbon intensity. Finally, we develop and demonstrate a generalized method for considering the varying time profile of LUC emissions from bioenergy production, taking into account the time path of future carbon prices, the discount rate and the time horizon. When evaluated in the context of power sector applications, we found electricity from bioenergy crops to be less carbon-intensive than conventional coal-fired electricity generation and often less carbon-intensive than natural-gas fired generation. - Highlights: • Modeling methodology for assessing land use change emissions from bioenergy • Use GCAM
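
    The time-profile treatment of LUC emissions can be illustrated with a simplified present-value calculation: discount the emission stream and the bioenergy output stream at the same rate and take their ratio. The emission pulse, energy output and discount rate below are invented numbers, and this sketch omits the carbon-price path used in the paper's generalized method.

```python
import numpy as np


def discounted_carbon_intensity(luc_emissions, annual_energy, rate=0.03):
    """Simplified carbon intensity for a LUC emission profile.
    luc_emissions: tCO2 released in each year t = 0..T-1 (an up-front pulse
    plus any ongoing flux); annual_energy: GJ of bioenergy produced per year
    over the same horizon.  Both streams are discounted at `rate` and the
    ratio gives an intensity in tCO2/GJ.  Illustrative present-value
    treatment only, not the paper's full integrated-assessment method."""
    t = np.arange(len(luc_emissions))
    df = (1.0 + rate) ** (-t)                      # discount factors
    pv_emissions = np.sum(np.asarray(luc_emissions) * df)
    pv_energy = np.sum(np.asarray(annual_energy) * df)
    return pv_emissions / pv_energy


# e.g. a one-off LUC pulse of 100 tCO2 followed by 30 years of 500 GJ/yr of output
emissions = [100.0] + [0.0] * 29
energy = [500.0] * 30
print(discounted_carbon_intensity(emissions, energy, rate=0.03))
```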

  12. Data science in R a case studies approach to computational reasoning and problem solving

    Nolan, Deborah

    2015-01-01

    Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and ComputationData Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar

  13. (Validation of) computational fluid dynamics modeling approach to evaluate VSC-17 dry storage cask thermal designs

    This paper presents results from a numerical analysis of the thermal evaluation of a Ventilated Concrete Storage Cask VSC-17 system. Three-dimensional simulations are performed for the VSC-17 system, and the results are compared to experimental data. The VSC-17 is a concrete-shielded spent nuclear fuel (SNF) cask system designed to contain 17 pressurized water reactor (PWR) fuel assemblies for storage and transportation. The system consists of a ventilated concrete cask (VCC) and a multi-assembly sealed basket (MSB). The VCC is a concrete cylindrical vessel, fabricated as a single piece and fitted with a flat plate at the bottom. The concrete cask provides structural support, shielding, and natural convection cooling for the MSB. The MSB has an outer steel shell and an inner fuel guide sleeve assembly that holds canisters containing spent fuel rods. Cooling airflow inside the concrete cask is driven by natural convection. Heat transfer in the cask is a complicated process because of the inherent complexity of the geometry and the fixed and natural convection induced by the radioactive decay process. Other factors that contribute to the overall heat transfer include the heat generation by the spent fuel, the thermal boundary condition, the filling medium within the MSB, and the vertical or horizontal orientation of the cask. Proper thermal analysis of dry storage casks is important for accurate estimation of the peak fuel temperature and peak cladding temperature (PCT). Proper estimation of PCT ensures the integrity of cladding and is important for safety evaluation of independent spent fuel storage installations. Accurate estimation of the peak fuel temperature and peak cladding temperature ensures the integrity of the cladding. The spent nuclear fuel may be exposed to air and oxidize if the cladding is damaged and thus increase the potential for release of radioactivity. In the current analysis, numerical simulations are carried out using the computational fluid

  14. Charges at aqueous interfaces: Development of computational approaches in direct contact with experiment

    Vácha, R.; Uhlig, Frank; Jungwirth, Pavel

    Vol. 155. Hoboken : Wiley, 2014 - (Rice, S.; Dinner, A.), s. 69-95 ISBN 978-1-118-75577-8 Institutional support: RVO:61388963 Keywords : aqueous interfaces * computational approaches * electronic structure approach * ionic charges * polarization Subject RIV: CF - Physical ; Theoretical Chemistry

  15. A New Approach to Practical Active-Secure Two-Party Computation

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio; Burra, Sai Sheshank

    2011-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce a...

  16. A New Approach to Practical Active-Secure Two-Party Computation

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio; Burra, Sai Sheshank

    2012-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce a...

  17. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Ankur eGupta

    2013-12-01

    Full Text Available We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson’s Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al 1998; Ingvarsson et al 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force, into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using utility distribution instead of using purely Value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal and PD patients accurately (Fellows et al. 1998; Ingvarsson et al. 1997). To our knowledge the model is the first model of precision grip in PD conditions.

  18. Computational model of precision grip in Parkinson's disease: a utility based approach.

    Gupta, Ankur; Balasubramani, Pragathi P; Chakravarthy, V Srinivasa

    2013-01-01

    We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force, into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using utility distribution instead of using purely Value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge the model is the first model of PG in PD conditions. PMID:24348373
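
    The utility-based action selection described in both records can be sketched as a risk-sensitive softmax: subtract a risk penalty from each action value and turn the resulting utilities into choice probabilities. The penalty form (value minus a multiple of the return's standard deviation), the softmax rule and all parameter values are assumptions for illustration, not the exact formulation of the model.

```python
import numpy as np


def select_action(q_values, risk, alpha=0.5, beta=5.0, rng=None):
    """Risk-sensitive ('utility-based') action selection sketch.
    q_values: expected return of each action; risk: an estimate of the
    return variance.  Utility trades value against risk with an assumed
    risk-aversion weight alpha, then a softmax with inverse temperature
    beta turns utilities into choice probabilities."""
    rng = rng or np.random.default_rng()
    utility = np.asarray(q_values) - alpha * np.sqrt(np.asarray(risk))
    p = np.exp(beta * (utility - utility.max()))   # shift for numerical stability
    p /= p.sum()
    return rng.choice(len(p), p=p), p


# the riskier high-value action loses some probability mass to the safer one
action, probs = select_action(q_values=[1.0, 1.2], risk=[0.05, 0.6])
print(action, probs)
```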

  19. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    Petra, Cosmin G.

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
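
    The core linear-algebra step the abstract refers to, forming a scenario's contribution to the Schur complement S = C − Bᵀ A⁻¹ B by factorizing the scenario block once and solving against many right-hand sides, can be sketched as follows. This sketch uses SciPy's complete sparse LU rather than the incomplete augmented factorization inside PARDISO with the outer BiCGStab iteration, and the small matrices are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def schur_contribution(A, B, C):
    """Contribution of one scenario to the Schur complement
    S = C - B^T A^{-1} B: factorize the sparse scenario block A once and
    solve against the columns of B (the multiple right-hand sides)."""
    lu = spla.splu(A.tocsc())      # complete LU here; the paper uses an
    X = lu.solve(B.toarray())      # incomplete factorization plus BiCGStab
    return C - B.T @ X             # to absorb pivot perturbations


A = sp.csc_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
B = sp.csc_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
C = np.eye(2) * 5.0
print(schur_contribution(A, B, C))
```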

  20. Efficient all-optical quantum computing based on a hybrid approach

    Lee, Seung-Woo

    2011-01-01

    Quantum computers are expected to offer phenomenal increases of computational power. In spite of many proposals based on various physical systems, scalable quantum computation in a fault-tolerant manner is still beyond current technology. Optical models have some prominent advantages such as relatively quick operation time compared to decoherence time. However, massive resource requirements and the gap between the fault tolerance limit and the realistic error rate should be significantly reduced. Here, we develop a novel approach with all-optical hybrid qubits devised to combine advantages of well-known previous approaches. It enables one to efficiently perform universal gate operations in a simple and near-deterministic way using all-optical hybrid entanglement as off-line resources. Remarkably, our approach outperforms the previous ones when considering both the resource requirements and fault tolerance limits. Our work paves an efficient way for the optical realization of scalable quantum computation.

  1. A new computer approach to mixed feature classification for forestry application

    Kan, E. P.

    1976-01-01

    A computer approach for mapping mixed forest features (i.e., types, classes) from computer classification maps is discussed. Mixed features such as mixed softwood/hardwood stands are treated as admixtures of softwood and hardwood areas. Large-area mixed features are identified and small-area features neglected when the nominal size of a mixed feature can be specified. The computer program merges small isolated areas into surrounding areas by the iterative manipulation of the postprocessing algorithm that eliminates small connected sets. For a forestry application, computer-classified LANDSAT multispectral scanner data of the Sam Houston National Forest were used to demonstrate the proposed approach. The technique was successful in cleaning the salt-and-pepper appearance of multiclass classification maps and in mapping admixtures of softwood areas and hardwood areas. However, the computer-mapped mixed areas matched very poorly with the ground truth because of inadequate resolution and inappropriate definition of mixed features.

  2. A scalable approach to modeling groundwater flow on massively parallel computers

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.

  3. Discovery Learning and the Computational Experiment in Higher Mathematics and Science Education: A Combined Approach

    Athanasios Kyriazis

    2009-12-01

    Full Text Available In this article we present our research on Discovery learning in relation to the computational experiment for the instruction of Mathematics and Science university courses, using the approach of the computational experiment through electronic worksheets. The approach is based on the principles of Discovery learning expanded with the principles of constructivist, socio–cultural and adult learning theories, the concept of computer based cognitive tools and the aspects on which the computational experiment is founded. Applications are presented using the software Mathematica and electronic worksheets for selected domains of Physics. We also present a case study, concerning the application of the computational experiment through electronic worksheets in the School of Pedagogical and Technological Education (ASPETE) during the spring semester of the academic year 2008–2009. Research results concerning the impact of the above mentioned issues on students’ beliefs and learning performance are presented.

  4. Teleportation-based quantum computation, extended Temperley-Lieb diagrammatical approach and Yang-Baxter equation

    Zhang, Yong; Zhang, Kun; Pang, Jinglong

    2016-01-01

    This paper focuses on the study of topological features in teleportation-based quantum computation and aims at presenting a detailed review on teleportation-based quantum computation (Gottesman and Chuang in Nature 402: 390, 1999). In the extended Temperley-Lieb diagrammatical approach, we clearly show that such topological features make the fault-tolerant construction of both universal quantum gates and four-partite entangled states more intuitive and simpler. Furthermore, we describe the Yang-Baxter gate by its extended Temperley-Lieb configuration and then study teleportation-based quantum circuit models using the Yang-Baxter gate. Moreover, we discuss the relationship between the extended Temperley-Lieb diagrammatical approach and the Yang-Baxter gate approach. With these research results, we propose a worthwhile subject, the extended Temperley-Lieb diagrammatical approach, for physicists in quantum information and quantum computation.

  5. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with high programming speed and low programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
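
    The closed-loop idea behind such an adaptive pulsed algorithm, read the cell, apply a SET pulse, and raise the transistor gate voltage gradually until the conductance lands within a tolerance band, can be sketched as below. The `read_conductance` and `apply_set_pulse` callables, all voltages, the tolerance and the toy cell model are hypothetical; real pulse shapes, overshoot handling and timings are device-specific and are not taken from the paper.

```python
def program_cell(read_conductance, apply_set_pulse, target_g, tol=0.12,
                 vg_start=0.9, vg_step=0.05, vg_max=1.8, max_pulses=50):
    """Closed-loop program-and-verify sketch for a 1T1R cell.
    `read_conductance()` and `apply_set_pulse(gate_v)` stand in for a
    hardware interface.  The gate voltage is raised gradually so the SET
    operation approaches the target conductance in small compliance-limited
    steps (the adaptive idea described in the abstract)."""
    vg = vg_start
    for pulses_used in range(max_pulses):
        g = read_conductance()
        if abs(g - target_g) <= tol * target_g:   # verify: within relative tolerance
            return pulses_used, g
        if g > target_g:
            break           # overshoot: a RESET branch would go here (omitted in this sketch)
        apply_set_pulse(gate_v=vg)                # program: one SET pulse
        vg = min(vg + vg_step, vg_max)            # adapt the gate (compliance) upward
    return max_pulses, read_conductance()


# toy usage with a crude simulated cell (siemens)
state = {"g": 10e-6}
def read(): return state["g"]
def set_pulse(gate_v): state["g"] += 8e-6 * (gate_v - 0.6)
print(program_cell(read, set_pulse, target_g=60e-6))
```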

  6. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross sectional CT slices using a combined method of the two aforementioned approaches. The proposed method of this study is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules which are a characteristic of the knowledge based method. Secondly, the same template sets were employed to perform template matching related to the learning methodologies approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined template-set-centric approach proposed in this study.
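
    The template-matching half of such a combined approach can be sketched with OpenCV's normalized cross-correlation: slide a landmark template over a CT slice and accept the best match only above a score threshold. The synthetic image, the threshold and the choice of TM_CCOEFF_NORMED are assumptions for illustration; the knowledge-based rules that constrain the search region in the paper are not reproduced.

```python
import cv2
import numpy as np


def locate_landmark(slice_img, template, threshold=0.6):
    """Slide `template` over `slice_img` with normalized cross-correlation
    and return the match centre if the best score clears `threshold`."""
    result = cv2.matchTemplate(slice_img, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None, max_val
    h, w = template.shape[:2]
    return (max_loc[0] + w // 2, max_loc[1] + h // 2), max_val


# synthetic stand-ins for a CT slice and a landmark template (illustration only)
slice_img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(slice_img, (140, 90), 12, 255, -1)          # fake "landmark"
template = slice_img[78:102, 128:152].copy()           # 24x24 patch around it
print(locate_landmark(slice_img, template))
```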

  7. Fully Integrated Approach to Compute Vibrationally Resolved Optical Spectra: From Small Molecules to Macrosystems.

    Barone, Vincenzo; Bloino, Julien; Biczysko, Malgorzata; Santoro, Fabrizio

    2009-03-10

    A general and effective time-independent approach to compute vibrationally resolved electronic spectra from first principles has been integrated into the Gaussian computational chemistry package. This computational tool offers a simple and easy-to-use way to compute theoretical spectra starting from geometry optimization and frequency calculations for each electronic state. It is shown that in such a way it is straightforward to combine calculation of Franck-Condon integrals with any electronic computational model. The given examples illustrate the calculation of absorption and emission spectra, all in the UV-vis region, of various systems from small molecules to large ones, in gas as well as in condensed phases. The computational models applied range from fully quantum mechanical descriptions to discrete/continuum quantum mechanical/molecular mechanical/polarizable continuum models. PMID:26610221

  8. Medical imaging in clinical applications algorithmic and computer-based approaches

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer aided diagnosis approaches as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of the CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in algorithmic and computer based approaches to medical imaging applications and to utilize them in real-world clinical applications. The book is divided into four parts, Part-I: Clinical Applications of Medical Imaging, Part-II: Classification and Clustering, Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies and Part-IV: Bio-inspired Computer Aided Diagnosis techniques.

  9. Smoothing spline analysis of variance approach for global sensitivity analysis of computer codes

    The paper investigates a nonparametric regression method based on the smoothing spline analysis of variance (ANOVA) approach to address the problem of global sensitivity analysis (GSA) of complex and computationally demanding computer codes. The two-step algorithm of this method involves an estimation procedure and a variable selection. The latter can become computationally demanding when dealing with high dimensional problems. Thus, we proposed a new algorithm based on Landweber iterations. Using the fact that the considered regression method is based on ANOVA decomposition, we introduced a new direct method for computing sensitivity indices. Numerical tests performed on several analytical examples and on an application from petroleum reservoir engineering showed that the method gives competitive results compared to a more standard Gaussian process approach.
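
    The Landweber iteration mentioned above is, in its plain form, the fixed-point update x_{k+1} = x_k + τ Aᵀ(y − A x_k) for a linear system A x ≈ y, convergent for 0 < τ < 2/‖A‖². The sketch below shows only this generic iteration on an invented system; the smoothing spline ANOVA estimation and variable-selection machinery it supports in the paper is not reproduced.

```python
import numpy as np


def landweber(A, y, tau=None, n_iter=200):
    """Plain Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k)
    for A x ~= y; converges for 0 < tau < 2 / ||A||^2."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm gives a safe step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x


A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
y = A @ np.array([2.0, -1.0])
print(landweber(A, y))   # should approach [2, -1]
```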

  10. Hybrid Intelligent Routing in Wireless Mesh Networks: Soft Computing Based Approaches

    Sharad Sharma

    2013-12-01

    Full Text Available Wireless Mesh Networks (WMNs) are evolutionary self-organizing multi-hop wireless networks that promise last mile access. Due to the emergence of stochastically varying network environments, routing in WMNs is critically affected. In this paper, we first propose a fuzzy logic based hybrid performance metric comprising link and node parameters. This Integrated Link Cost (ILC) is computed for each link based upon throughput, delay, jitter of the link and residual energy of the node and is used to compute the shortest path between a given source-terminal node pair. Further to address the optimal routing path selection, two soft computing based approaches are proposed and analyzed along with a conventional approach. Extensive simulations are performed for various architectures of WMNs with varying network conditions. It was observed that the proposed approaches are far superior in dealing with the dynamic nature of WMNs as compared to the Adhoc On-demand Distance Vector (AODV) algorithm.
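
    The routing idea can be sketched by assigning each link a single cost and running a shortest-path algorithm over it. The sketch below replaces the paper's fuzzy-logic ILC with a crisp weighted stand-in (high throughput and residual energy lower the cost, delay and jitter raise it); the weights, the linear form and the link measurements are assumptions for illustration.

```python
import networkx as nx


def integrated_link_cost(throughput, delay, jitter, residual_energy,
                         weights=(0.4, 0.3, 0.2, 0.1)):
    """Crisp stand-in for the fuzzy Integrated Link Cost (ILC): lower is
    better, so throughput and residual energy enter inversely while delay
    and jitter add directly.  Weights and form are assumed, not the paper's."""
    wt, wd, wj, we = weights
    return (wd * delay + wj * jitter
            + wt / max(throughput, 1e-9) + we / max(residual_energy, 1e-9))


G = nx.Graph()
# hypothetical link measurements: (throughput Mb/s, delay ms, jitter ms, residual energy)
links = {("a", "b"): (10, 5, 1, 0.9), ("b", "c"): (8, 4, 2, 0.8),
         ("a", "c"): (2, 20, 6, 0.5)}
for (u, v), (tp, d, j, e) in links.items():
    G.add_edge(u, v, ilc=integrated_link_cost(tp, d, j, e))

print(nx.shortest_path(G, "a", "c", weight="ilc"))   # prefers the two-hop route
```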

  11. Mathematics of shape description a morphological approach to image processing and computer graphics

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics by presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: Explains the fundamental and advanced relationships between algebraic system and shape description through the set-theoretic approach; Promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; P...

  12. Computer simulation: A new scientific approach to the study of language evolution

    Cangelosi, Angelo; Parisi, Domenico

    2002-01-01

    (summary of the whole book) This volume provides a comprehensive survey of computational models and methodologies used for studying the origin and evolution of language and communication. With contributions from the most influential figures in the field, Simulating the Evolution of Language presents and summarises current computational approaches to language evolution and highlights new lines of development. Among the main discussion points are: · Analysis of emerging linguisti...

  13. Non-invasive computation of aortic pressure maps: a phantom-based study of two approaches

    Delles, Michael; Schalck, Sebastian; Chassein, Yves; Müller, Tobias; Rengier, Fabian; Speidel, Stefanie; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2014-03-01

    Patient-specific blood pressure values in the human aorta are an important parameter in the management of cardiovascular diseases. A direct measurement of these values is only possible by invasive catheterization at a limited number of measurement sites. To overcome these drawbacks, two non-invasive approaches for computing patient-specific relative aortic blood pressure maps throughout the entire aortic vessel volume are investigated by our group. The first approach uses computations from complete time-resolved, three-dimensional flow velocity fields acquired by phase-contrast magnetic resonance imaging (PC-MRI), whereas the second approach relies on computational fluid dynamics (CFD) simulations with ultrasound-based boundary conditions. A detailed evaluation of these computational methods under realistic conditions is necessary in order to investigate their overall robustness and accuracy as well as their sensitivity to certain algorithmic parameters. We present a comparative study of the two blood pressure computation methods in an experimental phantom setup, which mimics a simplified thoracic aorta. The comparative analysis includes the investigation of the impact of algorithmic parameters on the MRI-based blood pressure computation and the impact of extracting pressure maps in a voxel grid from the CFD simulations. Overall, a very good agreement between the results of the two computational approaches can be observed despite the fact that both methods used completely separate measurements as input data. Therefore, the comparative study of the presented work indicates that both non-invasive pressure computation methods show an excellent robustness and accuracy and can therefore be used for research purposes in the management of cardiovascular diseases.

  14. Parallel kd-Tree Based Approach for Computing the Prediction Horizon Using Wolf’s Method

    Águila, J. J.; Arias, E.; Artigao, M. M.; Miralles, J.J.

    2015-01-01

    In different fields of science and engineering, a model of a given underlying dynamical system can be obtained by means of measurement data records called time series. This model becomes very important to understand the original system behaviour and to predict the future values of that system. From the model, parameters such as the prediction horizon can be computed to obtain the point where the prediction becomes useless. In this work, a new parallel kd-tree based approach for computing the ...
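
    The nearest-neighbour machinery behind Wolf-style estimates can be sketched with a kd-tree: delay-embed the series, query each point's nearest neighbour, and average the logarithmic rate at which the pairs separate. This is a deliberately simplified serial sketch (embedding parameters, horizon and the logistic-map test series are assumptions), not the parallel kd-tree algorithm of the paper, and it ignores refinements such as excluding temporally close neighbours.

```python
import numpy as np
from scipy.spatial import cKDTree


def largest_lyapunov(series, dim=3, lag=1, horizon=10):
    """Crude nearest-neighbour divergence estimate of the largest Lyapunov
    exponent (the quantity behind the prediction horizon).  Delay-embed the
    series, find each point's nearest neighbour with a kd-tree, and average
    the log stretching rate of each pair over `horizon` steps."""
    n = len(series) - (dim - 1) * lag
    emb = np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])
    tree = cKDTree(emb[:-horizon])
    d0, idx = tree.query(emb[:-horizon], k=2)        # k=2: skip the point itself
    d0, idx = d0[:, 1], idx[:, 1]
    rates = []
    for i, (j, dist0) in enumerate(zip(idx, d0)):
        if dist0 > 0:
            dist_h = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
            if dist_h > 0:
                rates.append(np.log(dist_h / dist0) / horizon)
    return float(np.mean(rates))


# logistic map at r=4 (chaotic); the true largest exponent is ln(2) ~ 0.69,
# which this crude estimate only roughly approximates
x = [0.4]
for _ in range(2000):
    x.append(4 * x[-1] * (1 - x[-1]))
print(largest_lyapunov(np.array(x)))
```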

  15. Approaches to Transient Computing for Energy Harvesting Systems: A Quantitative Evaluation

    Rodriguez, Alberto; Balsamo , Domenico; Das(2), Anup; Weddell, Alex S.; Brunelli, Davide; Al-Hashimi, Bashir; Merrett, Geoff V

    2015-01-01

    Systems operating from harvested sources typically integrate batteries or supercapacitors to smooth out rapid changes in harvester output. However, such energy storage devices require time for charging and increase the size, mass and cost of the system. A recent approach to address this is to power systems directly from the harvester output, termed transient computing. To solve the problem of having to restart computation from the start due to power-cycles, a number of techniques have been pr...

  16. Computer based virtual reality approach towards its application in an accidental emergency at nuclear power plant

    Virtual reality is a computer based system for creating and receiving a virtual world. As an emerging branch of computer science, this approach is rapidly expanding and widely used in a variety of industries such as national defence, research, engineering, medicine and air navigation. The author intends to present the fundamentals of virtual reality, in an attempt to study some aspects of interest for use in nuclear power emergency planning.

  17. Computer-based diagnostic and prognostic approaches in medical research using brain MRI

    Weygandt, Martin

    2016-01-01

    This habilitation thesis on "Computer-based diagnostic and prognostic approaches in medical research using brain MRI" is organized into two sections. Specifically, the first section gives an overview of various aspects of the computer- and MRI-based prediction approach. The second section describes the articles from this field that I submitted for the habilitation. Specifically, the first section of the thesis begins with the fundamental...

  18. Portable Parallel CORBA Objects: an Approach to Combine Parallel and Distributed Programming for Grid Computing

    Denis, Alexandre; Pérez, Christian; Priol, Thierry

    2001-01-01

    With the availability of Computational Grids, new kinds of applications that will soon emerge will raise the problem of how to program them on such computing systems. In this paper, we advocate a programming model that is based on a combination of parallel and distributed programming models. Compared to previous approaches, this work aims at bringing SPMD programming into CORBA. For example, we want to interconnect two MPI codes by CORBA without modifying MPI or CORBA. We show that such an ap...

  19. A Computational Approach to Essential and Nonessential Objective Functions in Linear Multicriteria Optimization

    Malinowska, Agnieszka B

    2007-01-01

    The question of obtaining well-defined criteria for multiple criteria decision making problems is well-known. One of the approaches dealing with this question is the concept of nonessential objective function. A certain objective function is called nonessential if the set of efficient solutions is the same both with or without that objective function. In this paper we put together two methods for determining nonessential objective functions. A computational implementation is done using a computer algebra system.

  20. A Multi-Step and Multi-Level Approach for Computer Aided Molecular Design

    Harper, Peter Mathias; Gani, Rafiqul

    A general multi-step approach for setting up, solving and solution analysis of computer aided molecular design (CAMD) problems is presented. The approach differs from previous work within the field of CAMD since it also addresses the need for a computer aided problem formulation and result analysis. The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post solution step for result analysis and verification is included in the methodology.

  1. A Two Layer Approach to the Computability and Complexity of Real Functions

    Lambov, Branimir Zdravkov

    We present a new model for computability and complexity of real functions together with an implementation that is based on it. The model uses a two-layer approach in which low-type basic objects perform the computation of a real function, but, whenever needed, can be complemented with higher type characterizing functions. A similar discrimination is implemented in the presented real number package, which operates on both an approximation layer and a layer which operates on real numbers as complete entities. This approach allows the model to provide correctness and completeness to the established notions...

  2. A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems

    Aleksander Byrski

    2010-01-01

    Full Text Available In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems as well as the means to manipulate the configuration of the computation according to the crises or user’s specific demands.

  3. A Workflow-Forecast Approach To The Task Scheduling Problem In Distributed Computing Systems

    Gritsenko, Andrey

    2013-01-01

    The aim of this paper is to provide a description of a deep-learning-based scheduling approach for academic-purpose high-performance computing systems. The share of academic-purpose distributed computing systems (DCS) reaches 17.4 percent amongst TOP500 supercomputer sites (15.6 percent in terms of performance), which makes them a valuable object of research. The core of this approach is to predict the future workflow of the system depending on the previously submitted tasks using deep learning al...

  4. A Multi-Step and Multi-Level Approach for Computer Aided Molecular Design

    Harper, Peter Mathias; Gani, Rafiqul

    A general multi-step approach for setting up, solving and solution analysis of computer aided molecular design (CAMD) problems is presented. The approach differs from previous work within the field of CAMD since it also addresses the need for a computer aided problem formulation and result analysis. The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post solution step for result analysis and verification is included in the methodology. (C) 2000 Elsevier Science Ltd. All rights reserved.

  5. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS) for enterprises, and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay or paying a minimal fee for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a...

  6. A Computer-Aided FPS-Oriented Approach for Construction Briefing

    Xiaochun Luo; Qiping Shen

    2008-01-01

    Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful in performing product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic, efficient identification, clarification, and representation of client requirements in trial running. The limitations of the approach and future research work are also discussed at the end of the paper.

  7. Objective Definition of Rosette Shape Variation Using a Combined Computer Vision and Data Mining Approach

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H.; Gay, Alan P.

    2014-01-01

    Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflex...

  8. Artificial intelligence and tutoring systems computational and cognitive approaches to the communication of knowledge

    Wenger, Etienne

    2014-01-01

    Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic

  9. Intraoperative Computed Tomography Guidance to Confirm Decompression Following Endoscopic Endonasal Approach for Cervicomedullary Compression

    Gande, Abhiram; Tormenti, Matthew J.; Koutourousiou, Maria; Paluzzi, Alessandro; Fernandez-Miranda, Juan C.; Snyderman, Carl H.; Gardner, Paul A.

    2013-01-01

    Introduction Cervicomedullary compression often requires an anterior approach to address the compressive vector. In certain cases an endoscopic endonasal approach (EEA) is ideal for decompression. It is essential that an adequate decompression be achieved and verified before the patient leaves the operating room. The purpose of this study was to evaluate the use of intraoperative computed tomography (IO-CT) in assessing the adequacy of decompression.

  10. A COMPUTER-AIDED UNIFIED APPROACH TO MODELING PWM AND RESONANT CONVERTERS

    2005-01-01

    This letter puts forward a method of modeling for the steady-state and small-signal dynamic analysis of PWM, quasi-resonant and series/(parallel) resonant switching converters based on a pulse-waveform integral approach. As an example, PWM and quasi-resonant converters are used to discuss the principle of the approach. The results are compared with those in the related literature. Computer-aided analyses are performed to confirm the correctness of the approach.

  11. Computer Networks Security Models - A New Approach for Denial-of-Services Attacks Mitigation

    Tsvetanov, Tsvetomir

    2010-01-01

    Computer networks are a critical factor for the performance of a modern company. Managing networks is as important as managing any other aspect of the company’s performance and security. There are many tools and appliances for monitoring the traffic and analyzing the network flow security. They use different approaches and rely on a variety of characteristics of the network flows. Network researchers are still working on a common approach for security baselining that might e...

  12. Simple and accurate scheme to compute electrostatic interaction: zero-dipole summation technique for molecular system and application to bulk water.

    Fukuda, Ikuo; Kamiya, Narutoshi; Yonezawa, Yasushige; Nakamura, Haruki

    2012-08-01

    The zero-dipole summation method was extended to general molecular systems, and then applied to molecular dynamics simulations of an isotropic water system. In our previous paper [I. Fukuda, Y. Yonezawa, and H. Nakamura, J. Chem. Phys. 134, 164107 (2011)], for evaluating the electrostatic energy of a classical particle system, we proposed the zero-dipole summation method, which conceptually prevents the nonzero-charge and nonzero-dipole states artificially generated by a simple cutoff truncation. Here, we consider the application of this scheme to molecular systems, as well as some fundamental aspects of general cutoff truncation protocols. Introducing an idea to harmonize the bonding interactions and the electrostatic interactions in the scheme, we develop a specific algorithm. As in the previous study, the resulting energy formula is represented by a simple pairwise function sum, enabling facile applications to high-performance computation. The accuracy of the electrostatic energies calculated by the zero-dipole summation method with the atom-based cutoff was numerically investigated, by comparison with those generated by the Ewald method. We obtained an electrostatic energy error of less than 0.01% at a cutoff length longer than 13 Å for a TIP3P isotropic water system, and the errors were quite small, as compared to those obtained by conventional truncation methods. The static property and the stability in an MD simulation were also satisfactory. In addition, the dielectric constants and the distance-dependent Kirkwood factors were measured, and their coincidences with those calculated by the particle mesh Ewald method were confirmed, although such coincidences are not easily attained by truncation methods. We found that the zero damping-factor gave the best results in a practical cutoff distance region. In fact, in contrast to the zero-charge scheme, the damping effect was insensitive in the zero-charge and zero-dipole scheme, in the molecular system we
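
    To make the "simple pairwise function sum" structure of such cutoff-based electrostatics concrete, the sketch below evaluates a pairwise, cutoff-truncated electrostatic energy for a set of point charges. The actual zero-dipole pair function and its self-energy term are defined in the cited papers; the shifted Coulomb pair function used here is only an illustrative stand-in, and the random toy system is an assumption.

```python
import numpy as np

# Minimal sketch of a pairwise, cutoff-based electrostatic energy sum of the
# kind used by the zero-dipole summation method.  The actual zero-dipole pair
# function and self-energy term are defined in the cited papers; the simple
# shifted Coulomb pair function below is an illustrative stand-in only.

def pair_energy(qi, qj, r, r_cut):
    """Illustrative shifted Coulomb pair energy that vanishes at the cutoff."""
    return qi * qj * (1.0 / r - 1.0 / r_cut)

def cutoff_electrostatic_energy(positions, charges, r_cut):
    """Sum a pairwise electrostatic energy over all pairs within r_cut."""
    n = len(charges)
    energy = 0.0
    for i in range(n - 1):
        # Distances from particle i to all later particles (no minimum image here).
        diff = positions[i + 1:] - positions[i]
        dist = np.linalg.norm(diff, axis=1)
        for j, r in enumerate(dist, start=i + 1):
            if r < r_cut:
                energy += pair_energy(charges[i], charges[j], r, r_cut)
    return energy

# Toy usage: random unit charges in a 30 A box with a 13 A cutoff.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 30.0, size=(50, 3))
q = rng.choice([-1.0, 1.0], size=50)
print(cutoff_electrostatic_energy(pos, q, r_cut=13.0))
```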

  13. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

    The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing - specially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. In this work such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB and using the CUDA environment, something that is usually fragmented in different scientific communities and restricted to specialists on parallel computing strategies.
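
    As a minimal serial illustration of the kind of quantity computed above, the sketch below estimates the largest Lyapunov exponent of the forced Duffing oscillator with the classical two-trajectory (Benettin-style) renormalization scheme. The parameter values, step sizes and initial condition are illustrative assumptions; the paper's MATLAB/CUDA codes, global spectra and Lagrangian coherent structures are not reproduced.

```python
import numpy as np

# Largest-Lyapunov-exponent estimate for the forced Duffing oscillator using
# the two-trajectory (Benettin) renormalization scheme.  Parameters and step
# sizes are illustrative assumptions, not values taken from the paper.

def duffing(t, state, delta=0.3, gamma=0.5, omega=1.2):
    """x'' + delta*x' - x + x**3 = gamma*cos(omega*t), as a first-order system."""
    x, v = state
    return np.array([v, -delta * v + x - x**3 + gamma * np.cos(omega * t)])

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def largest_lyapunov(y0, d0=1e-8, h=0.01, steps_per_renorm=100, n_renorm=2000):
    y_ref = np.asarray(y0, dtype=float)
    y_pert = y_ref + np.array([d0, 0.0])
    t, log_sum = 0.0, 0.0
    for _ in range(n_renorm):
        for _ in range(steps_per_renorm):
            y_ref = rk4_step(duffing, t, y_ref, h)
            y_pert = rk4_step(duffing, t, y_pert, h)
            t += h
        d = np.linalg.norm(y_pert - y_ref)
        log_sum += np.log(d / d0)
        # Pull the perturbed trajectory back to distance d0 from the reference.
        y_pert = y_ref + (y_pert - y_ref) * (d0 / d)
    return log_sum / (n_renorm * steps_per_renorm * h)

print(largest_lyapunov([0.1, 0.0]))  # a positive value indicates chaotic behaviour
```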

  14. Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach

    Khoumsi, Ahmed; Hadjou, Brahim

    2005-01-01

    Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…

  15. Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach

    The concept and computational aspects of a Monte-Carlo statistical approach in relating structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized

  16. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  17. Data Provenance and Management in Radio Astronomy: A Stream Computing Approach

    Mahmoud, Mahmoud S; Biem, Alain; Elmegreen, Bruce; Gulyaev, Sergei

    2011-01-01

    New approaches for data provenance and data management (DPDM) are required for mega science projects like the Square Kilometer Array, characterized by extremely large data volume and intense data rates, therefore demanding innovative and highly efficient computational paradigms. In this context, we explore a stream-computing approach with the emphasis on the use of accelerators. In particular, we make use of a new generation of high performance stream-based parallelization middleware known as InfoSphere Streams. Its viability for managing and ensuring interoperability and integrity of signal processing data pipelines is demonstrated in radio astronomy. IBM InfoSphere Streams embraces the stream-computing paradigm. It is a shift from conventional data mining techniques (involving analysis of existing data from databases) towards real-time analytic processing. We discuss using InfoSphere Streams for effective DPDM in radio astronomy and propose a way in which InfoSphere Streams can be utilized for large antenna...

  18. A Multi-step and Multi-level approach for Computer Aided Molecular Design

    A general multi-step approach for setting up, solving and solution analysis of computer aided molecular design (CAMD) problems is presented. The approach differs from previous work within the field of CAMD since it also addresses the need for a computer aided problem formulation and result analysis....... The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post solution...... step using an Integrated Computer Aided System (ICAS) for result analysis and verification is included in the methodology. Keywords: CAMD, separation processes, knowledge base, molecular design, solvent selection, substitution, group contribution, property prediction, ICAS Introduction The use of...

  19. Accurate basis set truncation for wavefunction embedding

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  20. Digital approach to planning computer-guided surgery and immediate provisionalization in a partially edentulous patient.

    Arunyanak, Sirikarn P; Harris, Bryan T; Grant, Gerald T; Morton, Dean; Lin, Wei-Shao

    2016-07-01

    This report describes a digital approach for computer-guided surgery and immediate provisionalization in a partially edentulous patient. With diagnostic data obtained from cone-beam computed tomography and intraoral digital diagnostic scans, a digital pathway of virtual diagnostic waxing, a virtual prosthetically driven surgical plan, a computer-aided design and computer-aided manufacturing (CAD/CAM) surgical template, and implant-supported screw-retained interim restorations were realized with various open-architecture CAD/CAM systems. The optional CAD/CAM diagnostic casts with planned implant placement were also additively manufactured to facilitate preoperative inspection of the surgical template and customization of the CAD/CAM-fabricated interim restorations. PMID:26868961

  1. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
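
    The sketch below shows only the fingerprint-matching core of such a scheme: an offline database of (position, RSSI vector) fingerprints, a simple data-scaling step, and a weighted k-nearest-neighbour position estimate. The hidden-Markov augmentation and timing-adjust refinement described above are not reproduced, and the scaling constants, toy database and function names are assumptions.

```python
import numpy as np

# Minimal fingerprint-positioning core: scale RSSI vectors, then estimate the
# position as a weighted average of the k closest database fingerprints.
# The HMM augmentation and timing-adjust term of the paper are not shown.

def scale(rssi, offset=100.0, span=100.0):
    """Map raw RSSI values (roughly -100..0 dBm) onto [0, 1] before matching."""
    return (np.asarray(rssi, dtype=float) + offset) / span

def wknn_position(observed, fp_positions, fp_rssi, k=3):
    """Weighted k-NN: average the k nearest fingerprint positions,
    weighted by inverse distance in scaled signal space."""
    obs = scale(observed)
    db = np.array([scale(v) for v in fp_rssi])
    dist = np.linalg.norm(db - obs, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-9)
    return np.average(np.asarray(fp_positions)[nearest], axis=0, weights=weights)

# Toy database: four reference points in a 10 m x 10 m area, three base stations.
positions = [(0, 0), (0, 10), (10, 0), (10, 10)]
fingerprints = [(-40, -70, -75), (-70, -42, -78), (-72, -76, -41), (-60, -58, -55)]
print(wknn_position((-55, -65, -60), positions, fingerprints))
```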

  2. An analytical approach to photonic reservoir computing - a network of SOA's - for noisy speech recognition

    Salehi, Mohammad Reza; Abiri, Ebrahim; Dehyadegari, Louiza

    2013-10-01

    This paper investigates an approach to photonic reservoir computing for optical speech recognition on an isolated digit recognition task. An analytical approach to photonic reservoir computing is drawn on to decrease time consumption compared to numerical methods, which is very important when processing large signals such as speech. It is also observed that adjusting the reservoir parameters, along with a good nonlinear mapping of the input signal into the reservoir, boosts recognition accuracy in the analytical approach. Perfect recognition accuracy (i.e. 100%) can be achieved for noiseless speech signals. For noisy signals with signal-to-noise ratios of 0-10 dB, however, the observed accuracy ranged between 92% and 98%. In fact, the photonic reservoir demonstrated a 9-18% improvement compared to classical reservoir networks with hyperbolic tangent nodes.
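
    For readers unfamiliar with the reservoir-computing principle, the sketch below is a classical echo-state-network analogue: a fixed random recurrent reservoir is driven by the input and only a linear readout is trained, here by ridge regression. The paper realizes the reservoir photonically with a network of SOAs; the tanh nodes, sizes, leak rate and the toy one-step-ahead prediction task below are purely illustrative assumptions.

```python
import numpy as np

# Classical echo-state-network analogue of reservoir computing: only the
# linear readout is trained (ridge regression); the recurrent reservoir stays
# fixed.  All sizes, scalings and the toy task are illustrative assumptions.

rng = np.random.default_rng(1)
n_res, leak, ridge = 200, 0.3, 1e-6

W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        pre = W_in[:, 0] * ut + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(2000)
u = np.sin(0.05 * t) + 0.05 * rng.standard_normal(len(t))
X, y = run_reservoir(u[:-1]), u[1:]

# Ridge-regression readout trained on the first 1500 samples.
Xtr, ytr = X[:1500], y[:1500]
W_out = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(n_res), Xtr.T @ ytr)
pred = X[1500:] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - y[1500:]) ** 2)))
```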

  3. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, developing learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C, for developing the programming models, and the message passing interface (MPI) and OpenMP parallelization tools have been chosen for the implementation.
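
    The pipeline-as-multiphase-queue idea can be illustrated without MPI or OpenMP. The sketch below chains three service stages through queues using plain threads; it is only an analogue of the study's C/MPI/OpenMP models, and the stage functions and item stream are assumptions.

```python
import queue
import threading

# Pipeline computing as a chain of service stages connected by queues -- a
# software analogue of a multiphase queueing system.  The study itself works
# in C with MPI/OpenMP; this thread-and-queue version is only an illustration.

SENTINEL = object()  # marks the end of the stream

def stage(func, q_in, q_out):
    """Repeatedly take an item from q_in, apply func, and pass it downstream."""
    while True:
        item = q_in.get()
        if item is SENTINEL:
            q_out.put(SENTINEL)
            break
        q_out.put(func(item))

# Three toy phases of service applied to every item flowing through the pipeline.
phases = [lambda x: x + 1, lambda x: x * x, lambda x: x - 3]

queues = [queue.Queue() for _ in range(len(phases) + 1)]
workers = [threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
           for i, f in enumerate(phases)]
for w in workers:
    w.start()

for item in range(10):          # feed the pipeline
    queues[0].put(item)
queues[0].put(SENTINEL)

results = []
while True:                     # drain the last queue
    out = queues[-1].get()
    if out is SENTINEL:
        break
    results.append(out)
for w in workers:
    w.join()
print(results)
```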

  4. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  5. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327

  6. An Approach to Computer Modeling of Geological Faults in 3D and an Application

    ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai

    2006-01-01

    3D geological modeling, one of the most important applications in geosciences of 3D GIS, forms the basis and is a prerequisite for visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques of complex geological entities containing reverse faults are discussed and a series of approaches are proposed. The geological concepts involved in computer modeling and visualization of geological fault in 3D are explained, the type of data of geological faults based on geological exploration is analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a modeling technique of faults based on stratum recovery and a modeling technique of faults based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of a fault model of bed rock in the Beijing Olympic Green District is presented in order to show the practical result of this method. The principle and the process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals established. It strengthens our profound comprehension of geological phenomena and the modeling approach, and establishes the basic techniques of 3D geological modeling for practical applications in the field of geosciences.

  7. An Approach for Indoor Path Computation among Obstacles that Considers User Dimension

    Liu Liu

    2015-12-01

    Full Text Available People often transport objects within indoor environments and need enough space for the motion. In such cases, the accessibility of indoor spaces relies on the dimensions of both the person and the objects she/he operates. This paper proposes a new approach to avoid obstacles and compute indoor paths with respect to the user dimension. The approach excludes inaccessible spaces for a user in five steps: (1) compute the minimum distance between obstacles and find the inaccessible gaps; (2) group obstacles according to the inaccessible gaps; (3) identify groups of obstacles that influence the path between two locations; (4) compute boundaries for the selected groups; and (5) build a network in the accessible area around the obstacles in the room. Compared to the Minkowski sum method for outlining inaccessible spaces, the proposed approach generates simpler polygons for groups of obstacles that do not contain inner rings. The creation of a navigation network becomes easier based on these simple polygons. By using this approach, we can create user- and task-specific networks in advance. Alternatively, the accessible path can be generated on the fly before the user enters a room.
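
    The sketch below covers only steps (1)-(2) of the approach: computing the minimum gap between obstacles and merging into one group any obstacles whose gap is narrower than the user width. Representing obstacles as arrays of boundary points, the union-find grouping, and the toy room are assumptions; boundary construction and network building (steps 4-5) are not shown.

```python
import numpy as np

# Steps (1)-(2): find inaccessible gaps between obstacles and group obstacles
# whose mutual gap is smaller than the user width.  Obstacles are represented
# simply as (n, 2) arrays of boundary points, which is an assumption.

def min_gap(a, b):
    """Minimum distance between two obstacles given as boundary point arrays."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min()

def group_obstacles(obstacles, user_width):
    """Union-find grouping: obstacles with a gap < user_width join one group."""
    parent = list(range(len(obstacles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(obstacles)):
        for j in range(i + 1, len(obstacles)):
            if min_gap(obstacles[i], obstacles[j]) < user_width:
                parent[find(i)] = find(j)   # the gap is inaccessible: merge
    groups = {}
    for i in range(len(obstacles)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy room: three rectangular obstacles sampled by their corner points.
obs = [np.array([[1, 1], [2, 1], [2, 2], [1, 2]], float),
       np.array([[2.3, 1], [3.3, 1], [3.3, 2], [2.3, 2]], float),
       np.array([[6, 6], [7, 6], [7, 7], [6, 7]], float)]
print(group_obstacles(obs, user_width=0.8))   # first two merge, third stays alone
```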

  8. Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler

    The current work presents a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work includes NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function aiming at estimating NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air velocities (PA) and six levels of secondary air velocities (SA) were regulated by using particle swarm optimization (PSO) so as to achieve low-NOx combustion. To reduce the time demand, a more flexible stopping condition was used to improve the computational efficiency without loss of quality in the optimization results. The results showed that the proposed approach provided an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which was much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in earlier work. The main advantage of PSO was that the computational cost, typically less than 25 s on a PC system, is much less than that required for ACO. This means the proposed approach is more applicable to online and real-time applications for NOx emissions minimization in actual power plant boilers.
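
    The two ingredients described above (an SVR emissions model and a PSO search over air-velocity settings) can be sketched as follows. The synthetic training data, the number of adjustable parameters (4 PA + 6 SA = 10), and all PSO settings are assumptions, not the boiler data or the tuning used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# (1) Fit an SVR model of NOx emissions versus operating parameters, then
# (2) search the air-velocity settings that minimize the model prediction
# with a basic particle swarm optimizer.  All data here are synthetic.

rng = np.random.default_rng(2)
n_samples, n_params = 300, 10

# Synthetic stand-in for historical operating records and measured NOx (ppm).
X = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
y = 300 + 80 * np.sin(X @ rng.uniform(-2, 2, n_params)) + 5 * rng.standard_normal(n_samples)

model = SVR(C=10.0, epsilon=1.0).fit(X, y)

def pso_minimize(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization over the unit hypercube."""
    pos = rng.uniform(0.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, objective(gbest)

best_settings, best_nox = pso_minimize(lambda p: model.predict(p[None, :])[0], n_params)
print("predicted minimum NOx:", round(best_nox, 1), "ppm")
```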

  9. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan;

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in recent years, the time has come for...... diagnoses. The final result is a short but robust rule-based classification scheme, achieving a high degree of classification accuracy (exceeding 90% accuracy for most classes) in a meaningful and user-friendly representation form for the medical expert. The domain of application analyzed through the paper...

  10. An integrative computational approach to effectively guide experimental identification of regulatory elements in promoters

    Deyneko Igor V

    2012-08-01

    Full Text Available Abstract Background Transcriptional activity of genes depends on many factors like DNA motifs, conformational characteristics of DNA, melting etc. and there are computational approaches for their identification. However, in real applications, the number of predicted, for example, DNA motifs may be considerably large. In cases when various computational programs are applied, systematic experimental knock out of each of the potential elements obviously becomes nonproductive. Hence, one needs an approach that is able to integrate many heterogeneous computational methods and upon that suggest selected regulatory elements for experimental verification. Results Here, we present an integrative bioinformatic approach aimed at the discovery of regulatory modules that can be effectively verified experimentally. It is based on combinatorial analysis of known and novel binding motifs, as well as of any other known features of promoters. The goal of this method is the identification of a collection of modules that are specific for an established dataset and at the same time are optimal for experimental verification. The method is particularly effective on small datasets, where most statistical approaches fail. We apply it to promoters that drive tumor-specific gene expression in tumor-colonizing Gram-negative bacteria. The method successfully identified a number of potential modules, which required only a few experiments to be verified. The resulting minimal functional bacterial promoter exhibited high specificity of expression in cancerous tissue. Conclusions Experimental analysis of promoter structures guided by bioinformatics has proved to be efficient. The developed computational method is able to include heterogeneous features of promoters and suggest combinatorial modules for experimental testing. Expansibility and robustness of the methodology implemented in the approach ensures good results for a wide range of problems.

  11. Radiological management of blunt polytrauma with computed tomography and angiography: an integrated approach

    Kurdziel, J.C.; Dondelinger, R.F.; Hemmer, M.

    1987-01-01

    107 polytraumatized patients who had experienced blunt trauma were worked up at admission with computed tomography of the thorax, abdomen and pelvis, following computed tomography of the brain: significant lesions were revealed in 98 (90%) patients. 79 (74%) patients showed trauma to the thorax, and in 69 (64%) patients abdominal or pelvic trauma was evidenced. No false positive diagnosis was established. 5 traumatic findings were missed. Emergency angiography was indicated in 3 (3%) patients following the computed tomography examination. 3 other trauma patients were submitted directly to angiography without a computed tomography examination during the period of this study. Embolization was carried out in 5/6 patients. No thoracotomy was needed. 13 (12%) patients underwent laparotomy following computed tomography. Overall mortality during the hospital stay was 14% (15/107). No patient died from visceral bleeding. Conservative management of blunt polytrauma patients can be advocated in almost 90% of visceral lesions. Computed tomography coupled with angiography and embolization represents an adequate integrated approach to the management of blunt polytrauma patients.

  13. Dystrophic calcification in muscles of legs in calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia syndrome: Accurate evaluation of the extent with 99mTc-methylene diphosphonate single photon emission computed tomography/computed tomography

    We present the case of a 35-year-old man with calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia variant scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. 99mTc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and bilateral legs, involving the muscles, which was well-delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis.

  14. Seismic safety margins research program. Phase I. Project VII: systems analysis specifications of computational approach

    A computational methodology is presented for the prediction of core melt probabilities in a nuclear power plant due to earthquake events. The proposed model has four modules: seismic hazard, structural dynamic (including soil-structure interaction), component failure and core melt sequence. The proposed modules would operate in series and would not have to be operated at the same time. The basic statistical approach uses a Monte Carlo simulation to treat random and systematic error but alternate statistical approaches are permitted by the program design
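
    A toy Monte Carlo sketch of the four-module chain (seismic hazard, structural response, component failure, core-melt sequence) is given below. Every distribution, fragility number and the two-system melt logic are invented placeholders used only to illustrate the simulation structure, not the program's actual models.

```python
import numpy as np

# Toy Monte Carlo chaining of the four modules: sampled seismic hazard drives
# structural response, which drives component fragilities, which feed a simple
# core-melt logic.  All numbers below are invented placeholders.

rng = np.random.default_rng(3)
N = 200_000

# 1. Seismic hazard: peak ground acceleration (g), lognormally distributed.
pga = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=N)

# 2. Structural response: floor acceleration with random amplification.
floor_acc = pga * rng.lognormal(mean=np.log(2.0), sigma=0.3, size=N)

# 3. Component failures: lognormal fragility curves for two safety systems.
def fails(acc, median_capacity, beta):
    capacity = median_capacity * rng.lognormal(0.0, beta, size=acc.size)
    return acc > capacity

cooling_fails = fails(floor_acc, median_capacity=1.2, beta=0.4)
power_fails = fails(floor_acc, median_capacity=0.9, beta=0.4)

# 4. Core-melt sequence: in this toy logic, melt requires both systems to fail.
core_melt = cooling_fails & power_fails
print("estimated core-melt probability per event:", core_melt.mean())
```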

  15. Approach and tool for computer animation of fields in electrical apparatus

    The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables handling of results of two- and three-dimensional physical field phenomena obtained from finite element software, or displaying movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, scenario creation and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)

  16. A Review of Intrusion Detection Technique by Soft Computing and Data Mining Approach

    Aditya Shrivastava

    2013-09-01

    Full Text Available The growth of internet technology has spread a large amount of data communication. This communication of data is compromised by network threats and security issues, which raise problems of data integrity and data loss. To address data integrity and data loss, Anderson developed a model of intrusion detection system about 20 years ago. Initially, intrusion detection systems worked on the statistical frequency of audit system logs. Later, this system was improved by various researchers applying other approaches such as data mining techniques, neural networks and expert systems. Current research trends in intrusion detection use soft computing approaches such as fuzzy logic, genetic algorithms and machine learning. This paper discusses some data mining and soft computing methods for the purpose of intrusion detection. The KDDCUP99 dataset is used for the performance evaluation of these techniques.

  17. A Computational Approach for Analyzing and Detecting Emotions in Arabic Text

    Amira F. El Gohary, Torky I. Sultan, Maha A. Hana, Mohamed M. El Dosoky

    2013-05-01

    Full Text Available The field of Affective Computing (AC) expects to narrow the communicative gap between the highly emotional human and the emotionally challenged computer by developing computational systems that recognize and respond to the affective states of the user. Affect-sensitive interfaces are being developed in a number of domains, including gaming, mental health, and learning technologies. Emotions are part of human life. Recently, interest has been growing among researchers to find ways of detecting subjective information in blogs and other online social media. This paper is concerned with the automatic detection of emotions in Arabic text. The construction is based on a moderate-sized Arabic emotion lexicon used to annotate Arabic children's stories for the six basic emotions: Joy, Fear, Sadness, Anger, Disgust, and Surprise. Our approach achieves 65% accuracy for emotion detection in Arabic text.
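
    A minimal lexicon-based detector of the kind described above counts how many words of a text appear in each emotion's word list and returns the dominant basic emotion. The tiny lexicon below is an invented placeholder, not the paper's moderate-sized Arabic lexicon, and whitespace tokenization is an assumption.

```python
# Minimal lexicon-based emotion detection: count lexicon hits per emotion and
# return the dominant one.  The tiny word lists below are placeholders only.

EMOTION_LEXICON = {
    "joy":      {"سعيد", "فرح", "happy"},
    "fear":     {"خوف", "خائف", "afraid"},
    "sadness":  {"حزين", "بكى", "sad"},
    "anger":    {"غضب", "غاضب", "angry"},
    "disgust":  {"قرف", "disgust"},
    "surprise": {"مفاجأة", "اندهش", "surprised"},
}

def detect_emotion(text):
    """Return (dominant_emotion, per-emotion counts) for a whitespace-tokenized text."""
    tokens = text.split()
    counts = {emotion: sum(tok in words for tok in tokens)
              for emotion, words in EMOTION_LEXICON.items()}
    dominant = max(counts, key=counts.get)
    return (dominant if counts[dominant] > 0 else None), counts

print(detect_emotion("كان الولد سعيد ثم شعر خوف كبير ثم فرح"))
```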

  18. A direct approach to fault-tolerance in measurement-based quantum computation via teleportation

    We discuss a simple variant of the one-way quantum computing model (Raussendorf R and Briegel H-J 2001 Phys. Rev. Lett. 86 5188), called the Pauli measurement model, where measurements are restricted to be along the eigenbases of the Pauli X and Y operators, while qubits can be initially prepared both in the |+_{π/4}⟩ := (1/√2)(|0⟩ + e^{iπ/4}|1⟩) state and the usual |+⟩ := (1/√2)(|0⟩ + |1⟩) state. We prove the universality of this quantum computation model, and establish a standardization procedure which permits all entanglement and state preparation to be performed at the beginning of computation. This leads us to develop a direct approach to fault-tolerance by simple transformations of the entanglement graph and preparation operations, while error correction is performed naturally via syndrome-extracting teleportations

  19. A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran

    This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.
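
    The sketch below illustrates only the simulated-annealing ingredient: SA tuning the coefficients (a, b) of an assumed Angström-Prescott-type form H/H0 = a + b·(S/S0) against synthetic monthly data. The genetic-programming structure search, the additional climatological inputs and the measured Iranian data used in the paper are not reproduced; everything shown is an illustrative assumption.

```python
import math
import random

# Simulated annealing fitting the coefficients of an assumed empirical form
# H/H0 = a + b * (S/S0) to synthetic monthly data.  Only the SA half of the
# hybrid GP/SA scheme is illustrated; all data and settings are assumptions.

random.seed(4)

# Synthetic "observed" monthly sunshine fractions S/S0 and clearness indices H/H0.
s_frac = [0.45 + 0.04 * i for i in range(12)]
h_frac = [0.23 + 0.48 * s + random.gauss(0, 0.01) for s in s_frac]

def rmse(params):
    a, b = params
    return math.sqrt(sum((a + b * s - h) ** 2 for s, h in zip(s_frac, h_frac)) / len(s_frac))

def simulated_annealing(objective, x0, t0=1.0, cooling=0.995, steps=20000, step_size=0.05):
    x, fx = list(x0), objective(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(steps):
        cand = [xi + random.uniform(-step_size, step_size) for xi in x]
        fc = objective(cand)
        # Accept improvements always, and worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling
    return best, fbest

(a, b), err = simulated_annealing(rmse, [0.0, 0.0])
print(f"fitted H/H0 = {a:.3f} + {b:.3f} * (S/S0), RMSE = {err:.4f}")
```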

  20. Caveat Emptor: The computable general equilibrium approach to assessing the poverty impact of trade liberalisation

    Colin Kirkpatrick; S. Serban Scrieciu

    2007-01-01

    Millennium Development Goals, increasing attention is being given to the poverty impacts of trade liberalisation. The high level of public interest has stimulated renewed research activity aimed at expanding the evidence-base available to trade negotiators and policymakers. Within this context, computable general equilibrium economic modelling tools have been widely employed. This paper provides a critical assessment of the CGE economic modelling approach to assessing the impact of multilater...