WorldWideScience

Sample records for accurate computational approach

  1. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  2. Accurate computation of Mathieu functions

    CERN Document Server

    Bibby, Malcolm M

    2013-01-01

    This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach

  3. Two-component density functional theory within the projector augmented-wave approach: Accurate and self-consistent computations of positron lifetimes and momentum distributions

    Science.gov (United States)

    Wiktor, Julia; Jomard, Gérald; Torrent, Marc

    2015-09-01

    Many techniques have been developed in the past in order to compute positron lifetimes in materials from first principles. However, there is still a lack of a fast and accurate self-consistent scheme that can accurately handle the forces acting on the ions induced by the presence of the positron. We will show in this paper that we have reached this goal by developing the two-component density functional theory within the projector augmented-wave (PAW) method in the open-source code abinit. This tool offers the accuracy of all-electron methods with the computational efficiency of plane-wave ones. We can thus deal with supercells that contain a few hundred to thousands of atoms to study point defects as well as more extended defect clusters. Moreover, using the PAW basis set allows us to use techniques able to, for instance, treat strongly correlated systems or spin-orbit coupling, which are necessary to study heavy elements, such as the actinides or their compounds.

  4. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The multispecimen approach was validated, and the importance of additional tests and criteria for assessing multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  5. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    Science.gov (United States)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture, and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light polarization method using two galvanometer scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that the reduction in aperture differs from part to part because of the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under the different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increasing normal stress and different values of
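
    A minimal numpy sketch of the kind of spectral analysis described above: take the FFT of a single roughness profile sampled at the stated 2.5 μm spacing and inspect which spatial frequencies dominate. The synthetic profile and its amplitudes are placeholders, not CLSM data from the study.

```python
# Minimal sketch: FFT-based spectral analysis of a fracture-roughness profile.
# The profile array is a placeholder standing in for a CLSM scan line; only the
# 2.5 micrometre sampling step and the use of the FFT reflect the abstract.
import numpy as np

dx = 2.5e-6                                 # sampling spacing along the scan line [m]
x = np.arange(4096) * dx

# Placeholder "rough" profile: long-wavelength undulations plus fine noise.
rng = np.random.default_rng(0)
profile = (5e-6 * np.sin(2 * np.pi * x / 2e-3)
           + 1e-6 * np.sin(2 * np.pi * x / 2e-4)
           + 2e-7 * rng.standard_normal(x.size))

# Magnitude spectrum of the detrended profile (arbitrary normalization).
detrended = profile - profile.mean()
spectrum = np.abs(np.fft.rfft(detrended)) / x.size
freqs = np.fft.rfftfreq(x.size, d=dx)       # spatial frequencies [1/m]

# If the largest components sit at low spatial frequencies, the roughness is
# dominated by long-wavelength undulations, as reported in the abstract.
top = np.argsort(spectrum)[-3:][::-1]
for k in top:
    print(f"frequency {freqs[k]:.3e} 1/m   amplitude {spectrum[k]:.3e} m")
```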

  6. INDUS - a composition-based approach for rapid and accurate taxonomic classification of metagenomic sequences

    OpenAIRE

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Reddy, Rachamalla Maheedhar; Reddy, Chennareddy Venkata Siva Kumar; Singh, Nitin Kumar; Sharmila S Mande

    2011-01-01

    Background: Taxonomic classification of metagenomic sequences is the first step in metagenomic analysis. Existing taxonomic classification approaches are of two types, similarity-based and composition-based. Similarity-based approaches, though accurate and specific, are extremely slow. Since metagenomic projects generate millions of sequences, adopting similarity-based approaches becomes virtually infeasible for research groups with modest computational resources. In this study, we present ...

  7. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    OpenAIRE

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  8. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  9. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels lying in the direction opposite to the gradient direction, and the weight of the votes is assigned according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state of the art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses, and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
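
    The following is a simplified, hypothetical sketch of the gradient-opposite voting idea described above. It substitutes a plain gradient-magnitude threshold for the Canny edge map, omits the morphological closing and the final classifier, and uses a synthetic dark disc instead of a BioID face image.

```python
# Simplified sketch of distributed voting for an eye-centre candidate map.
# Differences from the paper: a gradient-magnitude threshold replaces the Canny
# edge map, and the morphological/classifier post-processing is omitted.
import numpy as np

def eye_center_votes(gray, radii=range(5, 26), grad_thresh=30.0):
    """Accumulate votes cast opposite to the gradient direction (toward the
    darker pupil) from strong-gradient pixels; return the vote map."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    votes = np.zeros_like(gray, dtype=float)
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        ux, uy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]   # opposite to gradient
        for r in radii:
            cy, cx = int(round(y + r * uy)), int(round(x + r * ux))
            if 0 <= cy < gray.shape[0] and 0 <= cx < gray.shape[1]:
                votes[cy, cx] += mag[y, x]      # weight the vote by edge strength
    return votes

# Placeholder image: a dark disc ("pupil") on a brighter background.
img = np.full((80, 80), 200.0)
yy, xx = np.mgrid[:80, :80]
img[(yy - 40) ** 2 + (xx - 45) ** 2 < 12 ** 2] = 40.0

vote_map = eye_center_votes(img)
print("estimated centre (row, col):", np.unravel_index(vote_map.argmax(), vote_map.shape))
```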

  10. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  11. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier is applied to the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.
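
    As a rough illustration of the two-stage cascade structure mentioned above (a fast boosted prefilter followed by an SVM), the sketch below runs scikit-learn's AdaBoost and SVC on synthetic feature vectors. It is not the authors' Gentle Adaboost/HOG pipeline and no real vehicle images are involved.

```python
# Generic two-stage cascade sketch: a cheap boosted prefilter rejects most
# candidate regions, an SVM confirms the survivors. Synthetic feature vectors
# stand in for Haar/HOG descriptors of candidate logo regions; this illustrates
# only the cascade structure, not the authors' trained detector.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 2000, 64
X = rng.standard_normal((n, d))
y = (X[:, :8].sum(axis=1) > 0).astype(int)        # placeholder "logo / not logo" labels

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

stage1 = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)
stage2 = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

def cascade_predict(X):
    # Stage 1: cheap prefilter; only candidates it accepts reach the SVM.
    keep = stage1.predict(X) == 1
    out = np.zeros(len(X), dtype=int)
    if keep.any():
        out[keep] = stage2.predict(X[keep])
    return out

pred = cascade_predict(X_test)
print("cascade accuracy on synthetic data:", (pred == y_test).mean())
```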

  12. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using the computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These two studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  13. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes. PMID:27498635
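
    The sketch below is not the chapter's non-Boltzmann Bennett or nonequilibrium-work method; it only illustrates the textbook Zwanzig free-energy-perturbation estimator on which such QM/MM re-weighting schemes build, using synthetic energy differences in place of MM-to-QM re-evaluated snapshots.

```python
# Textbook Zwanzig estimator: dA = -kT ln < exp(-(U1 - U0)/kT) >_0, averaged
# over configurations sampled at state 0. The energy differences below are
# synthetic placeholders, not QM/MM data from the chapter.
import numpy as np

kT = 0.592  # kcal/mol at ~298 K

def zwanzig_free_energy(dU, kT=kT):
    """dU = U_target - U_sampled on configurations from the sampled state;
    returns the estimated free-energy difference (log-sum-exp for stability)."""
    dU = np.asarray(dU, dtype=float)
    m = (-dU / kT).max()
    return -kT * (m + np.log(np.mean(np.exp(-dU / kT - m))))

# Synthetic energy differences (e.g. MM -> QM re-evaluation of MM snapshots).
rng = np.random.default_rng(1)
dU = rng.normal(loc=2.0, scale=0.8, size=5000)   # kcal/mol, placeholder values

print("estimated dA (kcal/mol):", round(zwanzig_free_energy(dU), 3))
# For Gaussian dU the exact result is mean - var/(2 kT); quick consistency check:
print("Gaussian reference      :", round(2.0 - 0.8**2 / (2 * kT), 3))
```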

  15. Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically intensive...

  16. Accurate molecular structure and spectroscopic properties for nucleobases: A combined computational - microwave investigation of 2-thiouracil as a case study

    Science.gov (United States)

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2015-01-01

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier Transform Microwave spectrometers. The joint experimental-computational study allowed us to determine an accurate molecular structure and spectroscopic properties for the title molecule and, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739

  17. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
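
    A toy simulation may make the bounded-rationality rule concrete: drivers receive delayed travel-time feedback for two routes and switch only when the reported difference exceeds a threshold BR, otherwise they pick a route at random. The congestion model and all parameter values below are made up for illustration and are not the authors' model.

```python
# Toy two-route illustration of the bounded-rationality threshold BR described
# above: small BR -> herding on stale information and large oscillations;
# larger BR -> near-equal split and a small gap from equilibrium.
import numpy as np

rng = np.random.default_rng(0)

def simulate(BR, delay=3, steps=300, drivers=1000, t0=10.0, slope=0.02):
    history = []                                   # reported (delayed) travel times
    gaps = []
    flow_a = drivers // 2
    for _ in range(steps):
        t_a = t0 + slope * flow_a                  # linear congestion cost, route A
        t_b = t0 + slope * (drivers - flow_a)      # route B
        history.append((t_a, t_b))
        rep_a, rep_b = history[max(0, len(history) - 1 - delay)]   # delayed feedback
        if abs(rep_a - rep_b) < BR:
            choices = rng.random(drivers) < 0.5    # indifferent below BR: coin flip
        else:
            choices = np.full(drivers, rep_a < rep_b)
        flow_a = int(choices.sum())
        gaps.append(abs(t_a - t_b))
    return np.mean(gaps[50:])                      # mean deviation from equilibrium

for BR in (0.0, 0.5, 2.0):
    print(f"BR={BR:3.1f}  mean |t_A - t_B| = {simulate(BR):.2f}")
```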

  18. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    International Nuclear Information System (INIS)

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  19. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
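
    The reconstruction step above relies on a handful of conjugate-gradient iterations. The sketch below shows the generic conjugate-gradient (normal-equations) iteration for a toy projection matrix; it is not the paper's SPET system-response model or its parallel implementation.

```python
# Generic conjugate-gradient iteration on the normal equations A^T A x = A^T b,
# the building block of this kind of iterative reconstruction. The tiny random
# A and b below are placeholders for the SPET system model and projection data.
import numpy as np

def cg_normal_equations(A, b, n_iter=10):
    """Run n_iter conjugate-gradient steps on the normal equations (CGNR)."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.random((90 * 8, 16 * 16))      # toy "projection" matrix
x_true = rng.random(16 * 16)           # toy image
b = A @ x_true                         # noiseless toy projections
x10 = cg_normal_equations(A, b, n_iter=10)
print("relative error after 10 CG iterations:",
      np.linalg.norm(x10 - x_true) / np.linalg.norm(x_true))
```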

  20. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    Directory of Open Access Journals (Sweden)

    Usman Khan

    2014-04-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.
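
    A hedged illustration of the modified-Bessel building block such membrane models rest on: for a thin annulus with linearized losses the temperature rise satisfies theta'' + theta'/r - m^2 theta = 0, so theta(r) = C1*I0(m*r) + C2*K0(m*r), with the constants fixed by boundary conditions through a small linear system (the "matrix approach" mentioned in the abstract). All geometry and parameter values below are placeholders, not values from the paper.

```python
# Radial temperature rise in an annular membrane with linearised heat losses,
# expressed with modified Bessel functions I0 and K0. The geometry, the loss
# parameter m and the boundary temperatures are made-up placeholder numbers.
import numpy as np
from scipy.special import i0, k0

a, b = 100e-6, 500e-6            # heater edge and membrane rim radii [m] (placeholders)
m = 4000.0                       # loss parameter [1/m] (placeholder)
theta_a, theta_b = 800.0, 0.0    # temperature rise at heater edge and at the rim [K]

# Impose theta(a) = theta_a and theta(b) = theta_b via a 2x2 linear system.
M = np.array([[i0(m * a), k0(m * a)],
              [i0(m * b), k0(m * b)]])
C1, C2 = np.linalg.solve(M, np.array([theta_a, theta_b]))

r = np.linspace(a, b, 5)
theta = C1 * i0(m * r) + C2 * k0(m * r)
for ri, ti in zip(r, theta):
    print(f"r = {ri*1e6:6.1f} um   temperature rise = {ti:7.1f} K")
```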

  1. A new approach to constructing efficient stiffly accurate EPIRK methods

    Science.gov (United States)

    Rainwater, G.; Tokman, M.

    2016-10-01

    The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
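
    To make the notion of an exponential integrator concrete, the sketch below implements the simplest member of the family (exponential Euler), not one of the EPIRK schemes derived in the paper: for y' = Ay + g(y) it advances with y_{n+1} = e^{hA} y_n + h*phi1(hA) g(y_n), where phi1(z) = (e^z - 1)/z, using a dense matrix exponential where the paper would use an adaptive Krylov approximation. The test problem is made up.

```python
# Exponential Euler step for a stiff semilinear ODE y' = A y + g(y). The stiff
# linear part is treated exactly via expm, so the step stays stable at a step
# size where an explicit Runge-Kutta method would blow up (h*lambda = -25).
import numpy as np
from scipy.linalg import expm, solve

A = np.diag([-1.0, -500.0])                              # stiff linear part (placeholder)
g = lambda y: np.array([np.sin(y[1]), np.cos(y[0])])     # mild nonlinearity (placeholder)

def exp_euler_step(y, h):
    E = expm(h * A)
    phi1_g = solve(h * A, (E - np.eye(len(y))) @ g(y))   # phi1(hA) g(y)
    return E @ y + h * phi1_g

y = np.array([1.0, 1.0])
h, T = 0.05, 1.0
for _ in range(int(T / h)):
    y = exp_euler_step(y, h)
print("y(1.0) with exponential Euler, h=0.05:", y)
```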

  2. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  3. On accurate computations of bound state properties in three- and four-electron atomic systems

    CERN Document Server

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li$^{-}$ in its ground $2^{2}S-$state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  4. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  5. Elliptic curves a computational approach

    CERN Document Server

    Schmitt, Susanne; Pethö, Attila

    2003-01-01

    The basics of the theory of elliptic curves should be known to everybody, be he (or she) a mathematician or a computer scientist. Especially everybody concerned with cryptography should know the elements of this theory. The purpose of the present textbook is to give an elementary introduction to elliptic curves. Since this branch of number theory is particularly accessible to computer-assisted calculations, the authors make use of it by approaching the theory from a computational point of view. Specifically, the computer-algebra package SIMATH can be applied on several occasions. However, the book can be read also by those not interested in any computations. Of course, the theory of elliptic curves is very comprehensive and becomes correspondingly sophisticated. That is why the authors made a choice of the topics treated. Topics covered include the determination of torsion groups, computations regarding the Mordell-Weil group, height calculations, S-integral points. The contents are kept as elementary as poss...

  6. A semi-empirical approach to accurate standard enthalpies of formation for solid hydrides

    Energy Technology Data Exchange (ETDEWEB)

    Klaveness, A. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway)], E-mail: arnekla@kjemi.uio.no; Fjellvag, H.; Kjekshus, A.; Ravindran, P. [Department of Chemistry, University of Oslo, P.O. Box 1033, Blindern, N-0315 Oslo (Norway); Swang, O. [SINTEF Materials and Chemistry, P.O. Box 124, Blindern, N-0314 Oslo (Norway)

    2009-02-05

    A semi-empirical method for estimation of enthalpies of formation of solid hydrides is proposed. The method is named Ionic for short. By combining experimentally known enthalpies of formation for simple hydrides and reaction energies computed using band-structure density functional theory (DFT) methods, startlingly accurate results can be achieved. The approach relies on cancellation of errors when comparing DFT energies for systems with similar electronic structures. The influence of zero-point energies, polaritons, and vibrational excitations on the results has been examined and found to be minor.
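
    A small sketch of the Hess-cycle bookkeeping such a semi-empirical scheme implies: combine a DFT reaction energy with experimentally known enthalpies of formation of the reference hydrides. The function below and all species names and numbers are hypothetical placeholders, not the Ionic method's actual reactions or data.

```python
# Sketch of the Hess-cycle combination behind such a scheme: the unknown
# enthalpy of formation follows from a DFT reaction energy plus experimental
# enthalpies of the reference hydrides, relying on error cancellation between
# electronically similar solids. All names and numbers are hypothetical.
def enthalpy_of_formation(dE_dft_reaction, reactants, by_products):
    """For a reaction  sum(reactants) -> target + sum(by_products):
    dH_f(target) ~= dE_DFT + sum dH_f(reactants) - sum dH_f(by_products),
    with reactants/by_products mapping species -> (count, experimental dH_f in kJ/mol)."""
    dH_in = sum(n * h for n, h in reactants.values())
    dH_out = sum(n * h for n, h in by_products.values())
    return dE_dft_reaction + dH_in - dH_out

# Hypothetical example: the complex hydride "ABH3" formed from AH and BH2.
estimate = enthalpy_of_formation(
    dE_dft_reaction=-12.3,                              # DFT energy of AH + BH2 -> ABH3 (placeholder)
    reactants={"AH": (1, -56.3), "BH2": (1, -75.4)},    # experimental dH_f values (placeholders)
    by_products={},                                     # no by-products in this toy reaction
)
print("estimated dH_f(ABH3) =", round(estimate, 1), "kJ/mol")
```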

  7. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  8. Development of highly accurate approximate scheme for computing the charge transfer integral.

    Science.gov (United States)

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
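
    As a hedged illustration of two ingredients mentioned above, the sketch below computes the energy-split-in-dimer estimate t ≈ (E_upper - E_lower)/2 for a symmetric dimer and builds a second-order Taylor model of t along a displacement coordinate from finite differences; all energies are invented placeholders, not EOM-CC results for the ethylene dimer.

```python
# Placeholder illustration of (i) the energy-split-in-dimer (ESID) transfer
# integral for a symmetric dimer and (ii) a second-order Taylor model of t
# along an asymmetric coordinate as a cheap surrogate for recomputing t at
# every displaced geometry. All energies below are invented.
import numpy as np

def esid_transfer_integral(e_lower, e_upper):
    """Half the adiabatic splitting (valid for a symmetric dimer)."""
    return 0.5 * (e_upper - e_lower)

# Invented adiabatic state energies (eV) at displacements xi (Angstrom).
xi = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
pairs = [(0.000, 0.210), (0.000, 0.190), (0.000, 0.180), (0.000, 0.186), (0.000, 0.204)]
t_ref = np.array([esid_transfer_integral(el, eu) for el, eu in pairs])

# Taylor model t(xi) ~ t0 + t1*xi + 0.5*t2*xi^2 from central differences (h = 0.1).
h = 0.1
t0 = t_ref[2]
t1 = (t_ref[3] - t_ref[1]) / (2 * h)
t2 = (t_ref[3] - 2 * t_ref[2] + t_ref[1]) / h**2
taylor = t0 + t1 * xi + 0.5 * t2 * xi**2

for x, exact, approx in zip(xi, t_ref, taylor):
    print(f"xi = {x:+.1f} A   t_ESID = {exact:.4f} eV   Taylor = {approx:.4f} eV")
```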

  9. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  10. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels were determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
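
    A minimal sketch of the regression idea (not the paper's full MATLAB/REFPROP/CHEMKIN workflow): pick surrogate mole fractions that best match target properties under simple linear-mixing assumptions, here only H/C ratio and density. The species palette, property values, and targets are illustrative placeholders rather than FACE gasoline data.

```python
# Constrained least-squares blending of a small surrogate palette to match
# placeholder H/C-ratio and density targets; RON and distillation matching
# (which need kinetic and REFPROP simulations) are deliberately omitted.
import numpy as np
from scipy.optimize import minimize

# Palette: (carbon atoms, hydrogen atoms, liquid density kg/m^3) -- placeholders.
species = {
    "n-heptane":  (7, 16, 684.0),
    "iso-octane": (8, 18, 692.0),
    "toluene":    (7,  8, 867.0),
    "1-hexene":   (6, 12, 673.0),
}
C = np.array([v[0] for v in species.values()], float)
H = np.array([v[1] for v in species.values()], float)
rho = np.array([v[2] for v in species.values()], float)

target_HC, target_rho = 1.95, 720.0                # placeholder targets

def objective(x):
    hc = (x @ H) / (x @ C)                         # mole-fraction-weighted H/C
    dens = x @ rho                                 # crude linear density mixing
    return ((hc - target_HC) / target_HC) ** 2 + ((dens - target_rho) / target_rho) ** 2

x0 = np.full(len(species), 1.0 / len(species))
res = minimize(objective, x0, bounds=[(0, 1)] * len(species),
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])

for name, frac in zip(species, res.x):
    print(f"{name:12s} {frac:.3f}")
print("H/C =", round((res.x @ H) / (res.x @ C), 3), "  density =", round(res.x @ rho, 1))
```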

  11. Proposed Enhanced Object Recognition Approach for Accurate Bionic Eyes

    Directory of Open Access Journals (Sweden)

    Mohammad Shkoukani

    2012-07-01

    AI has played a large role in image formation and recognition, built on the supervised and unsupervised learning algorithms that learning agents follow. Neural networks also have a role in the integration of bionic eyes, but they are not discussed thoroughly in this paper. The chip to be implanted, a robotic device that applies methods developed in machine learning, relies on large-scale feature-learning algorithms to construct classifiers for object detection and recognition that feed the chip system. The challenge, however, lies in identifying complex images, which may require combining several feature-learning processes. In this paper, previously experimented approaches are described for individual cases of object concentration in order to obtain a high recognition outcome. Each approach addresses one aspect, and a suggested, not yet experimented approach may give a better visual aid for bionic recognition and identification by using additional learning and testing methods. The paper discusses the different kernel and convolutional approaches to classifying objects, in addition to a proposed model intended to maximize object formation and recognition. The proposed model combines a variety of algorithms that have been tested in different related works and uses different learning approaches to handle large training datasets.

  12. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
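
    A bare-bones version of the underlying idea (without the asymptotic formulae that give the paper its O(n) cost and extended accuracy): refine cosine initial guesses for the Legendre roots by Newton's method using the three-term recurrence, then form the weights from the derivative. This is a sketch only, checked against numpy's leggauss.

```python
# Newton iteration for Gauss-Legendre nodes and weights: roots of P_n refined
# from a classical cosine initial guess, weights w = 2/((1-x^2) P_n'(x)^2).
import numpy as np

def legendre_and_derivative(n, x):
    """P_n(x) and P_n'(x) via the three-term recurrence."""
    p0, p1 = np.ones_like(x), x
    for j in range(2, n + 1):
        p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
    dp = n * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def gauss_legendre(n, newton_steps=8):
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))      # classical initial guess
    for _ in range(newton_steps):
        p, dp = legendre_and_derivative(n, x)
        x -= p / dp                                  # Newton update
    _, dp = legendre_and_derivative(n, x)            # derivative at converged nodes
    w = 2.0 / ((1.0 - x * x) * dp * dp)
    return x, w

x, w = gauss_legendre(20)
xr, wr = np.polynomial.legendre.leggauss(20)         # reference rule from numpy
idx = np.argsort(x)
print("max node error  :", np.max(np.abs(x[idx] - xr)))
print("max weight error:", np.max(np.abs(w[idx] - wr)))
```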

  13. GRID COMPUTING AND CHECKPOINT APPROACH

    Directory of Open Access Journals (Sweden)

    Pankaj Gupta

    2011-05-01

    Grid computing is a means of allocating the computational power of a large number of computers to a complex, difficult computation or problem. It is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that may even span organizational boundaries. In this paper we investigate the different fault-tolerance techniques used in many real-time distributed systems. The main focus is on the types of faults occurring in the system, fault detection techniques, and the recovery techniques used. A fault, whether due to link failure, resource failure, or any other reason, must be tolerated so that the system keeps working smoothly and accurately. Such faults can be detected and recovered from by a variety of techniques applied accordingly. An appropriate fault detector can avoid losses due to system crashes, and a reliable fault-tolerance technique can prevent system failure. This paper describes how these methods are applied to detect and tolerate faults in various real-time distributed systems. The advantages of utilizing checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low-level checkpointing packages. Therefore, such a standard, named the Grid Checkpointing Architecture, is being designed. The fault-tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for automatic recovery is the availability of checkpoint files; a strategy to increase the availability of checkpoints is replication. Grid is a form of distributed computing mainly used to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of
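
    A minimal application-level checkpoint/restart pattern of the kind such fault-tolerance mechanisms rely on, shown locally with pickle; this is an illustration only and has nothing to do with the Grid Checkpointing Architecture or any specific middleware.

```python
# Minimal checkpoint/restart pattern: periodically persist the job state so a
# restarted job resumes from its last successful state instead of from scratch.
import os
import pickle

CHECKPOINT = "job_state.pkl"

def save_checkpoint(state, path=CHECKPOINT):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)          # atomic rename so a crash never leaves a torn file

def load_checkpoint(path=CHECKPOINT):
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "partial_sum": 0.0}   # fresh start

state = load_checkpoint()
for step in range(state["step"], 1000):
    state["partial_sum"] += step * 0.001     # stand-in for real work
    state["step"] = step + 1
    if step % 100 == 0:
        save_checkpoint(state)               # periodic checkpoint
save_checkpoint(state)
print("finished at step", state["step"], "partial_sum =", round(state["partial_sum"], 3))
```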

  14. Computational Approaches to Vestibular Research

    Science.gov (United States)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  15. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  16. Can computer simulators accurately represent the pathophysiology of individual COPD patients?

    OpenAIRE

    Wang, Wenfei; Das, Anup; Ali, Tayyba; Cole, Oanna; Chikhani, Marc; Haque, Mainul; Hardman, Jonathan G; Bates, Declan G

    2014-01-01

    Background: Computer simulation models could play a key role in developing novel therapeutic strategies for patients with chronic obstructive pulmonary disease (COPD) if they can be shown to accurately represent the pathophysiological characteristics of individual patients. Methods: We evaluated the capability of a computational simulator to reproduce the heterogeneous effects of COPD on alveolar mechanics as captured in a number of different patient datasets. Results: Our results show that accu...

  17. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    International Nuclear Information System (INIS)

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly

  18. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    Institute of Scientific and Technical Information of China (English)

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for nanoscale structures requires methods and models that are overkill in terms of their complexity and computation time (self-consistent, quantum computations). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show its impact on nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problem in the nanotechnology industry.

  19. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    Science.gov (United States)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  1. Computational approach to Riemann surfaces

    CERN Document Server

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existent computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition examples for solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  2. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  4. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    Science.gov (United States)

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants. PMID:24216719

  5. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im

  6. Accurate computation of Stokes flow driven by an open immersed interface

    Science.gov (United States)

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  7. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    Science.gov (United States)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.

  8. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    Science.gov (United States)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes such as proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of ⟨u⟩/2 (where ⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term, which mainly reflects the excluded-volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of four geometric measures of the solute with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables the computation of the water reorganization term to be completed in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
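
    The morphometric decomposition described above lends itself to a very small illustrative calculation. The sketch below (Python; not from the paper, and all coefficient values, geometric measures and the ⟨u⟩ value are hypothetical placeholders) only shows how the HFE estimate would be assembled once those quantities have been obtained elsewhere.

```python
# Illustrative sketch (not the authors' code): hydration free energy as
#   HFE ~= <u>/2 + G_reorg,
# where the water-reorganization term is expressed morphometrically as a
# linear combination of four geometric measures of the solute.
# The coefficients c1..c4 and all numerical inputs are placeholders.

def hfe_morphometric(mean_pair_energy, volume, surface_area,
                     mean_curvature_integral, gaussian_curvature_integral,
                     coeffs):
    """Assemble an HFE estimate from precomputed inputs.

    mean_pair_energy : ensemble-averaged solute-water pair interaction <u>
    coeffs           : (c1, c2, c3, c4), in practice fitted with the ER method
    """
    c1, c2, c3, c4 = coeffs
    g_reorg = (c1 * volume + c2 * surface_area
               + c3 * mean_curvature_integral + c4 * gaussian_curvature_integral)
    return 0.5 * mean_pair_energy + g_reorg

# Hypothetical numbers, purely to show the call signature:
print(hfe_morphometric(-850.0, 12.5, 45.0, 3.2, 0.8,
                       coeffs=(1.1, -0.4, 0.2, -0.05)))
```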

  9. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Science.gov (United States)

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
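
    For readers unfamiliar with the angular-spectrum propagation underlying the layer-oriented algorithm, the following sketch (Python/NumPy; the grid size, wavelength, pixel pitch and propagation distance are arbitrary example values, not parameters from the paper) propagates a single depth layer to the hologram plane without the paraxial approximation. A full CGH would sum such contributions over all layers and keep, e.g., the phase of the result.

```python
# Minimal angular-spectrum propagation of one depth layer, assuming a
# monochromatic scalar field sampled on a regular square-pixel grid.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z (no paraxial approximation)."""
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies in x
    fy = np.fft.fftfreq(ny, d=dx)          # and in y
    FX, FY = np.meshgrid(fx, fy)
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z) * (kz_sq > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

layer = np.ones((256, 256), dtype=complex)          # hypothetical layer amplitude
hologram_field = angular_spectrum_propagate(layer, 532e-9, 8e-6, 0.1)
```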

  10. Genetic crossovers are predicted accurately by the computed human recombination map.

    Directory of Open Access Journals (Sweden)

    Pavel P Khil

    2010-01-01

    Full Text Available Hotspots of meiotic recombination can change rapidly over time. This instability and the reported high level of inter-individual variation in meiotic recombination puts in question the accuracy of the calculated hotspot map, which is based on the summation of past genetic crossovers. To estimate the accuracy of the computed recombination rate map, we have mapped genetic crossovers to a median resolution of 70 Kb in 10 CEPH pedigrees. We then compared the positions of crossovers with the hotspots computed from HapMap data and performed extensive computer simulations to compare the observed distributions of crossovers with the distributions expected from the calculated recombination rate maps. Here we show that a population-averaged hotspot map computed from linkage disequilibrium data predicts well present-day genetic crossovers. We find that computed hotspot maps accurately estimate both the strength and the position of meiotic hotspots. An in-depth examination of not-predicted crossovers shows that they are preferentially located in regions where hotspots are found in other populations. In summary, we find that by combining several computed population-specific maps we can capture the variation in individual hotspots to generate a hotspot map that can predict almost all present-day genetic crossovers.

  11. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    Science.gov (United States)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
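
    As an illustration of the scheme discussed above, the sketch below advances the 1-D linear advection equation with a Gottlieb-Turkel-type 2-4 MacCormack predictor-corrector on a periodic grid (Python/NumPy). The one-sided difference coefficients are written as commonly quoted in the literature and should be checked against the original references; no artificial viscosity terms are included.

```python
# One 2-4 MacCormack step for u_t + a u_x = 0 on a periodic grid.
# Second order in time, fourth order in space when predictor and corrector
# are combined; coefficients as commonly quoted for the Gottlieb-Turkel variant.
import numpy as np

def maccormack_24_step(u, a, dt, dx):
    lam = a * dt / (6.0 * dx)
    # predictor: one-sided forward differences
    u_star = u - lam * (-np.roll(u, -2) + 8.0 * np.roll(u, -1) - 7.0 * u)
    # corrector: one-sided backward differences on the predicted field
    return 0.5 * (u + u_star
                  - lam * (7.0 * u_star - 8.0 * np.roll(u_star, 1) + np.roll(u_star, 2)))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2 * np.pi * x)
for _ in range(100):                      # CFL = 0.25, within the 2/3 stability bound
    u = maccormack_24_step(u, a=1.0, dt=0.25 * dx, dx=dx)
```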

  12. Fast and Accurate Computation of Time-Domain Acoustic Scattering Problems with Exact Nonreflecting Boundary Conditions

    CERN Document Server

    Wang, Li-Lian; Zhao, Xiaodan

    2011-01-01

    This paper is concerned with fast and accurate computation of exterior wave equations truncated via exact circular or spherical nonreflecting boundary conditions (NRBCs, which are known to be nonlocal in both time and space). We first derive analytic expressions for the underlying convolution kernels, which allow for a rapid and accurate evaluation of the convolution with $O(N_t)$ operations over $N_t$ successive time steps. To handle the nonlocality in space, we introduce the notion of boundary perturbation, which enables us to handle general bounded scatterers by solving a sequence of wave equations in a regular domain. We propose an efficient spectral-Galerkin solver with Newmark's time integration for the truncated wave equation in the regular domain. We also provide ample numerical results to show high-order accuracy of NRBCs and efficiency of the proposed scheme.

  13. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar;

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... in a vast range of parameters. The new hardware architecture allows us to verify the existing theoretical models for the complexity estimation in linear cryptanalysis. The designed hardware architecture is realized on two Xilinx Virtex-6 XC6VLX240T FPGAs for smaller block lengths, and on RIVYERA platform...

  14. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    Science.gov (United States)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration

  15. A novel approach for latent print identification using accurate overlays to prioritize reference prints.

    Science.gov (United States)

    Gantz, Daniel T; Gantz, Donald T; Walch, Mark A; Roberts, Maria Antonia; Buscaglia, JoAnn

    2014-12-01

    A novel approach to automated fingerprint matching and scoring that produces accurate locally and nonlinearly adjusted overlays of a latent print onto each reference print in a corpus is described. The technology, which addresses challenges inherent to latent prints, provides the latent print examiner with a prioritized ranking of candidate reference prints based on the overlays of the latent onto each candidate print. In addition to supporting current latent print comparison practices, this approach can make it possible to return a greater number of AFIS candidate prints because the ranked overlays provide a substantial starting point for latent-to-reference print comparison. To provide the image information required to create an accurate overlay of a latent print onto a reference print, "Ridge-Specific Markers" (RSMs), which correspond to short continuous segments of a ridge or furrow, are introduced. RSMs are reliably associated with any specific local section of a ridge or a furrow using the geometric information available from the image. Latent prints are commonly fragmentary, with reduced clarity and limited minutiae (i.e., ridge endings and bifurcations). Even in the absence of traditional minutiae, latent prints contain very important information in their ridges that permit automated matching using RSMs. No print orientation or information beyond the RSMs is required to generate the overlays. This automated process is applied to the 88 good quality latent prints in the NIST Special Database (SD) 27. Nonlinear overlays of each latent were produced onto all of the 88 reference prints in the NIST SD27. With fully automated processing, the true mate reference prints were ranked in the first candidate position for 80.7% of the latents tested, and 89.8% of the true mate reference prints ranked in the top ten positions. After manual post-processing of those latents for which the true mate reference print was not ranked first, these frequencies increased to 90

  16. Toward detailed prominence seismology - I. Computing accurate 2.5D magnetohydrodynamic equilibria

    CERN Document Server

    Blokland, J W S

    2011-01-01

    Context. Prominence seismology exploits our knowledge of the linear eigenoscillations for representative magnetohydrodynamic models of filaments. To date, highly idealized models for prominences have been used, especially with respect to the overall magnetic configurations. Aims. We initiate a more systematic survey of filament wave modes, where we consider full multi-dimensional models with twisted magnetic fields representative of the surrounding magnetic flux rope. This requires the ability to compute accurate 2.5 dimensional magnetohydrodynamic equilibria that balance Lorentz forces, gravity, and pressure gradients, while containing density enhancements (static or in motion). Methods. The governing extended Grad-Shafranov equation is discussed, along with an analytic prediction for circular flux ropes for the Shafranov shift of the central magnetic axis due to gravity. Numerical equilibria are computed with a finite element-based code, demonstrating fourth order accuracy on an explicitly known, non-triv...

  17. Soft Computing Approaches To Fault Tolerant Systems

    Directory of Open Access Journals (Sweden)

    Neeraj Prakash Srivastava

    2014-05-01

    Full Text Available In this paper we present an introduction to soft computing techniques for fault-tolerant systems, together with the associated terminology and the different ways of achieving fault tolerance. The paper focuses on the problem of fault tolerance using soft computing techniques. The fundamentals of soft computing approaches and their types are discussed, along with an introduction to fault tolerance. The main objective is to show how to implement soft computing approaches for fault detection, isolation and identification. The paper also gives details of a soft computing application, namely a wireless sensor network as a fault-tolerant system.

  18. DNA Reservoir Computing: A Novel Molecular Computing Approach

    CERN Document Server

    Goudarzi, Alireza; Stefanovic, Darko

    2013-01-01

    We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a high-dimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir. We show that despite using only three coupled oscillators, a molecular reservoir computer could achieve 90% accuracy on a benchmark temporal problem.
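
    The reservoir-computing idea itself is easy to prototype in software. The sketch below (Python/NumPy) replaces the coupled deoxyribozyme oscillators of the paper with a generic random recurrent reservoir and trains a linear readout by ridge regression on a toy delay task; the reservoir size, scaling factors and task are arbitrary choices for illustration only.

```python
# Minimal reservoir-computing sketch: random reservoir + linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, n_steps = 50, 1, 500

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

u = rng.uniform(-1, 1, (n_steps, n_in))             # input signal
target = np.roll(u[:, 0], 3)                        # toy temporal task: 3-step delay

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in @ u[t])                # reservoir transient dynamics
    states[t] = x

# linear readout: ridge regression from reservoir states to the target
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
prediction = states @ W_out
```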

  19. Antenna arrays a computational approach

    CERN Document Server

    Haupt, Randy L

    2010-01-01

    This book covers a wide range of antenna array topics that are becoming increasingly important in wireless applications, particularly in design and computer modeling. Signal processing and numerical modeling algorithms are explored, and MATLAB computer codes are provided for many of the design examples. Pictures of antenna arrays and components provided by industry and government sources are presented with explanations of how they work. Antenna Arrays is a valuable reference for practicing engineers and scientists in wireless communications, radar, and remote sensing, and an excellent textbook for advanced antenna courses.

  20. A Novel MoM Approach for Obtaining Accurate and Efficient Solutions in Optical Rib Waveguide

    OpenAIRE

    YENER, Namık

    2002-01-01

    The optical rib waveguide (ORW) plays an important role in the design of several integrated optical devices. Various methods have been proposed for obtaining the modal field solutions in ORW. However, to the best of our knowledge none of them is capable of providing accurate full-wave benchmark solutions. Here we present a novel MoM approach wherein the modes of a loaded rectangular waveguide are utilized as basis functions and demonstrate that this approach is very efficient and yie...

  1. Ontological Approach toward Cybersecurity in Cloud Computing

    OpenAIRE

    Takahashi, Takeshi; Kadobayashi, Youki; FUJIWARA, HIROYUKI

    2014-01-01

    Widespread deployment of the Internet has enabled the building of an emerging IT delivery model, i.e., cloud computing. Although cloud computing-based services have developed rapidly, their security aspects are still at an early stage of development. In order to preserve cybersecurity in cloud computing, the cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an...

  2. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    International Nuclear Information System (INIS)

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte-Carlo approach. This Monte-Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of the epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method provides a means to use actual operational data, leading to improvements over traditional methods (e.g., sensitivity analysis) which assume parametric models that may not accurately capture the possible complex statistical structures in the system inputs and responses. (author)
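
    A generic Monte-Carlo propagation of input uncertainties can be sketched as follows (Python/NumPy). The response function, the assumed error models and all numerical values are placeholders for illustration only; the actual CCP methodology relies on a thermal-hydraulics code and validated error models.

```python
# Generic Monte-Carlo uncertainty propagation sketch (not the licensing
# methodology itself): sample uncertain inputs from assumed error models,
# evaluate the response, and report a one-sided statistical limit.
import numpy as np

def critical_channel_power(flow, inlet_temp, pressure):
    # Placeholder response model; the real CCP comes from a thermal-hydraulics code.
    return 7.0 + 0.002 * flow - 0.01 * (inlet_temp - 260.0) + 0.05 * (pressure - 10.0)

rng = np.random.default_rng(42)
n = 100_000
flow = rng.normal(1000.0, 15.0, n)          # aleatory measurement scatter (assumed)
inlet_temp = rng.normal(262.0, 1.0, n)
pressure = rng.normal(10.0, 0.05, n) + rng.uniform(-0.02, 0.02, n)  # + epistemic bias band

ccp = critical_channel_power(flow, inlet_temp, pressure)
print("mean CCP:", ccp.mean(), " 2.5th percentile:", np.percentile(ccp, 2.5))
```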

  3. Immune based computer virus detection approaches

    Institute of Scientific and Technical Information of China (English)

    TAN Ying; ZHANG Pengtao

    2013-01-01

    The computer virus is considered one of the most horrifying threats to the security of computer systems worldwide. The rapid development of evasion techniques used in viruses causes signature-based computer virus detection techniques to be ineffective. Many novel computer virus detection approaches have been proposed in the past to cope with this ineffectiveness, mainly classified into three categories: static, dynamic and heuristic techniques. Owing to the natural similarities between the biological immune system (BIS) and the computer security system (CSS), the artificial immune system (AIS) has been developed as a new prototype in the community of anti-virus research. The immune mechanisms in the BIS provide the opportunity to construct computer virus detection models that are robust and adaptive, with the ability to detect unseen viruses. In this paper, a variety of classic computer virus detection approaches are introduced and reviewed against the background of computer virus history. Next, a variety of immune-based computer virus detection approaches are discussed in detail. Promising experimental results suggest that immune-based computer virus detection approaches are able to detect new variants and unseen viruses at lower false positive rates, which has paved a new way for anti-virus research.

  4. COMPUTATIONAL APPROACH TO ORGANIZATIONAL DESIGN

    OpenAIRE

    Alexander Arenas; Roger Guimera; Joan R. Alabart; Hans-Joerg Witt; Albert Diaz-Guilera

    2000-01-01

    The idea of this work is to propose an abstract and sufficiently simple agent-based model of company dynamics, in order to be able to deal computationally, and even analytically, with the problem of organizational design. Nevertheless, the model should be able to reproduce the essential characteristics of real organizations. The natural way of modeling a company is as a network in which the nodes represent employees and the links between them represent communication lines. In our model, problems ar...

  5. [The determinant role of an accurate medicosocial approach in the prognosis of pediatric blood diseases].

    Science.gov (United States)

    Toppet, M

    2005-01-01

    The care of infancy and childhood blood diseases implies a comprehensive medicosocial approach. This is a prerequisite for regular follow-up, for satisfactory compliance with treatment and for optimal patient quality of life. Different modalities of the medicosocial approach have been developed in the pediatric department (first in the Hospital Saint Pierre and then in the Children's University Hospital HUDERF). The considerable importance of a recent reform of the increased family allowances is briefly presented. The author underlines the determinant role of an accurate global approach, in which the patient and the family are surrounded by a multidisciplinary team, including social workers. PMID:16454232

  6. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    Science.gov (United States)

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128
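
    The flavor of the hybrid (ONIOM-like) energy expression can be conveyed with a toy two-level extrapolation. The sketch below (Python) assumes precomputed fragment energies, uses non-overlapping fragments and omits the overlap (inclusion-exclusion) corrections of the full MIM recombination; all energies are placeholder numbers, not results from the paper.

```python
# Toy two-level extrapolation in the spirit of ONIOM/MIM, assuming
# precomputed fragment energies and non-overlapping fragments.

def two_level_extrapolation(e_low_total, e_high_frags, e_low_frags):
    """E ~ E_low(total) + sum_k [E_high(frag_k) - E_low(frag_k)]."""
    correction = sum(eh - el for eh, el in zip(e_high_frags, e_low_frags))
    return e_low_total + correction

e_low_total = -154.20                 # whole molecule at the cheap level (placeholder)
e_high_frags = [-77.32, -77.35]       # fragments at the accurate level (placeholders)
e_low_frags = [-77.10, -77.12]        # same fragments at the cheap level (placeholders)
print(two_level_extrapolation(e_low_total, e_high_frags, e_low_frags))
```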

  7. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelis

  8. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  9. Accurate computation of wave loads on a bottom fixed circular cylinder

    DEFF Research Database (Denmark)

    Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.

    2012-01-01

    This abstract describes recent progress in the development of a fast and accurate tool for computations of wave-structure interactions of realistic sea states that include breaking waves. The practical motivation is extreme wave loads on offshore wind turbine foundations, but the tool is applicable......-dimensional water waves up to the point of breaking. The CFD solver is the open source CFD toolbox OpenFOAM® in combination with the newly developed waves2Foam utility, which in [5] has been successfully applied to calculations of free surface flows. The numerical solution is obtained by solving the incompressible...... Navier-Stokes equations in combination with a surface tracking scheme. The CFD solver has been thoroughly tested for stability and first order grid convergence has been shown for the propagation of stream function waves. Here we present results for the magnitudes of the third-harmonic forces

  10. Learning and geometry computational approaches

    CERN Document Server

    Smith, Carl

    1996-01-01

    The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...

  11. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  12. A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses

    CERN Document Server

    Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A

    2015-01-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...

  13. GRID COMPUTING AND FAULT TOLERANCE APPROACH

    Directory of Open Access Journals (Sweden)

    Pankaj Gupta,

    2011-10-01

    Full Text Available Grid computing is a means of allocating the computational power of a large number of computers to a complex or difficult computation or problem. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that even span organizational boundaries. This paper proposes a method to achieve maximum fault tolerance in the grid environment by considering reliability through a replication approach and a checkpoint approach. Fault tolerance is an important property for large-scale computational grid systems, where geographically distributed nodes co-operate to execute a task. In order to achieve a high level of reliability and availability, the grid infrastructure should be foolproof and fault tolerant. Since the failure of resources affects job execution fatally, a fault tolerance service is essential to satisfy QoS requirements in grid computing. Commonly utilized techniques for providing fault tolerance are job checkpointing and replication. Both techniques mitigate the amount of work lost due to changing system availability but can introduce significant runtime overhead. The latter largely depends on the length of the checkpointing interval and the chosen number of replicas, respectively. In the case of complex scientific workflows, where tasks execute in a well-defined order, reliability is another major challenge because of the unreliable nature of grid resources.
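
    The trade-off between checkpointing overhead and lost work noted above can be illustrated with Young's first-order approximation for the optimal checkpoint interval (an assumption of this example, not a result of the paper). The sketch below is in Python, and all numbers are placeholders.

```python
# Back-of-the-envelope checkpointing trade-off, using Young's approximation
# tau_opt ~ sqrt(2 * C * MTBF) and a crude overhead model. Illustrative only.
import math

def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for the checkpoint interval."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def expected_overhead_fraction(interval_s, checkpoint_cost_s, mtbf_s):
    # checkpoint overhead + expected re-computation after a failure (first order)
    return checkpoint_cost_s / interval_s + interval_s / (2.0 * mtbf_s)

tau = optimal_checkpoint_interval(checkpoint_cost_s=60.0, mtbf_s=24 * 3600.0)
print(tau, expected_overhead_fraction(tau, 60.0, 24 * 3600.0))
```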

  14. On the Fourier expansion method for highly accurate computation of the Voigt/complex error function in a rapid algorithm

    CERN Document Server

    Abrarov, S M

    2012-01-01

    In our recent publication [1] we presented an exponential series approximation suitable for highly accurate computation of the complex error function in a rapid algorithm. In this Short Communication we describe how a simplified representation of the proposed complex error function approximation makes possible further algorithmic optimization resulting in a considerable computational acceleration without compromise on accuracy.
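
    For reference, the Voigt profile can be written directly in terms of the Faddeeva (complex error) function. The sketch below (Python/SciPy) uses scipy.special.wofz rather than the exponential-series approximation discussed in the paper, purely to make the relationship concrete; the sample points and widths are arbitrary.

```python
# Voigt profile via the Faddeeva function w(z):
#   V(x; sigma, gamma) = Re[w(z)] / (sigma*sqrt(2*pi)),  z = (x + i*gamma)/(sigma*sqrt(2))
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=1.0, gamma=0.5))
```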

  15. Accurate Computation of Reduction Potentials of 4Fe−4S Clusters Indicates a Carboxylate Shift in Pyrococcus furiosus Ferredoxin

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager

    2007-01-01

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...

  16. Toward exascale computing through neuromorphic approaches.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  17. Accurate computation of surface stresses and forces with immersed boundary methods

    Science.gov (United States)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  18. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  19. Towards Lagrangian approach to quantum computations

    CERN Document Server

    Vlasov, A Yu

    2003-01-01

    In this work the possibility and practicality of a Lagrangian approach to quantum computations are discussed. The finite-dimensional Hilbert spaces used in this area pose some challenges for such a treatment. The model discussed here can be considered an analogue of the Weyl quantization of field theory via path integrals in L. D. Faddeev's approach. Weyl quantization can also be used in the finite-dimensional case, and some formulas may simply be rewritten by changing integrals to finite sums. On the other hand, there are specific difficulties relevant to the finite case. This work has some connections with the phase-space models of quantum computations developed recently by different authors.

  20. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows to extrapolate to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
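
    A bare-bones Green-Kubo evaluation from a heat-flux time series can be sketched as follows (Python/NumPy). The heat-flux trajectory here is synthetic noise and the volume, time step and temperature are placeholders, so the printed value is meaningless; conventions for the volume factor also differ between MD codes, so the expression should be checked against the code that produced the flux.

```python
# Green-Kubo sketch: kappa = V / (3 kB T^2) * integral_0^inf <J(0).J(t)> dt,
# with J taken here as the heat-flux density and an isotropic average over x, y, z.
import numpy as np

kB = 1.380649e-23  # J/K

def green_kubo_kappa(J, dt, volume, temperature):
    n = J.shape[0]
    acf = np.zeros(n // 2)
    for lag in range(n // 2):
        acf[lag] = np.mean(np.sum(J[: n - lag] * J[lag:], axis=1))  # <J(0).J(t)>
    integral = np.trapz(acf, dx=dt)
    return volume / (3.0 * kB * temperature**2) * integral

J = np.random.default_rng(1).normal(size=(5000, 3))   # placeholder heat-flux series
print(green_kubo_kappa(J, dt=1e-15, volume=1e-26, temperature=300.0))
```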

  1. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    KAUST Repository

    Gendron, É.

    2014-08-04

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, like in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and using an optimized linear algebra library, MORSE, providing a significant speedup against standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
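
    The tomographic core of the method, the linear MMSE reconstructor and the covariance of the tomographic residual, can be written down compactly. The sketch below (Python/NumPy) uses tiny covariance matrices built from synthetic data purely to show the algebra; in the instrument the matrices are of order 40 000 × 40 000 and are handled on GPUs.

```python
# Linear MMSE sketch: reconstructor R = C_ps * C_ss^{-1} and residual covariance
# C_err = C_pp - R * C_ps^T, built from a consistent (synthetic) joint covariance.
import numpy as np

rng = np.random.default_rng(3)
n_phase, n_meas = 10, 40

# Build a consistent joint covariance of (phase, measurements) from random data.
M = rng.normal(size=(n_phase + n_meas, 200))
C = M @ M.T / 200.0
C_pp = C[:n_phase, :n_phase]          # prior phase covariance
C_ps = C[:n_phase, n_phase:]          # phase-measurement cross covariance
C_ss = C[n_phase:, n_phase:]          # measurement covariance (noise + aliasing)

R = C_ps @ np.linalg.inv(C_ss)        # MMSE tomographic reconstructor
C_err = C_pp - R @ C_ps.T             # covariance of the tomographic residual
print(np.trace(C_err))                # residual phase variance (arbitrary units)
```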

  2. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Science.gov (United States)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding calculation using a less number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  3. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT-derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N = 26 HCM patients underwent leMDCT (64-slice CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 in CMR (p > 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT-measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  4. Unilateral hyperlucency of the lung: a systematic approach to accurate radiographic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Hyung Jun; Oh, Yu Whan; Choi, Eun Jung; Seo, Bo Kyung; Cho, Kyu Ran; Kang, Eun Young; Kim, Jung Hyuk [Korea University College of Medicine, Seoul (Korea, Republic of)

    2002-12-01

    The radiographic appearance of a unilateral hyperlucent lung is related to various conditions, the accurate radiographic interpretation of which requires a structured approach as well as an awareness of the spectrum of these entities. Firstly, it is important to determine whether a hyperlucent hemithorax is associated with artifacts resulting from rotation of the patient, grid cutoff, or the heel effect. The second step is to determine whether or not a hyperlucent lung is abnormal. Lung that is in fact normal may appear hyperlucent because of diffusely increased opacity of the opposite hemithorax. Thirdly, thoracic wall and soft tissue abnormalities such as mastectomy or Poland syndrome may cause hyperlucency. Lastly, abnormalities of lung parenchyma may result in hyperlucency. Lung abnormalities can be divided into two groups: a) obstructive or compensatory hyperinflation; and b) reduced vascular perfusion of the lung due to congenital or acquired vascular abnormalities. In this article, we describe and illustrate the imaging spectrum of these causes and outline a structured approach to accurate radiographic interpretation.

  5. A Highly Accurate and Efficient Analytical Approach to Bridge Deck Free Vibration Analysis

    Directory of Open Access Journals (Sweden)

    D.J. Gorman

    2000-01-01

    Full Text Available The superposition method is employed to obtain an accurate analytical-type solution for the free vibration frequencies and mode shapes of multi-span bridge decks. Free edge conditions are imposed on the long edges running in the direction of the deck. Inter-span support is of the simple (knife-edge) type. The analysis is valid regardless of the number of spans or their individual lengths. Exact agreement is found when computed results are compared with known eigenvalues for bridge decks with all spans of equal length. Mode shapes and eigenvalues are presented for typical bridge decks of three and four span lengths. In each case torsional and non-torsional modes are studied.

  6. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  7. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    Energy Technology Data Exchange (ETDEWEB)

    Balzar, D. [Ruder Boskovic Inst., Zagreb (Croatia); Ledbetter, H. [National Inst. of Standards and Technology, Boulder, CO (United States)

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.

  8. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Directory of Open Access Journals (Sweden)

    Saumya Tiwari

    Full Text Available Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires considerable expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.

  9. Computational Approach for Developing Blood Pump

    Science.gov (United States)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support to sick ventricles for those who suffer from late stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact designs with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  10. Handbook of computational approaches to counterterrorism

    CERN Document Server

    Subrahmanian, VS

    2012-01-01

    Terrorist groups throughout the world have been studied primarily through the use of social science methods. However, major advances in IT during the past decade have led to significant new ways of studying terrorist groups, making forecasts, learning models of their behaviour, and shaping policies about their behaviour. Handbook of Computational Approaches to Counterterrorism provides the first in-depth look at how advanced mathematics and modern computing technology are shaping the study of terrorist groups. This book includes contributions from world experts in the field, and presents extens

  11. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  12. A novel approach to accurate portal dosimetry using CCD-camera based EPIDs

    International Nuclear Information System (INIS)

    A new method for portal dosimetry using CCD camera-based electronic portal imaging devices (CEPIDs) is demonstrated. Unlike previous approaches, it is not based on a priori assumptions concerning CEPID cross-talk characteristics. In this method, the nonsymmetrical and position-dependent cross-talk is determined by directly imaging a set of cross-talk kernels generated by small fields ('pencil beams') exploiting the high signal-to-noise ratio of a cooled CCD camera. Signal calibration is achieved by imaging two reference fields. Next, portal dose images (PDIs) can be derived from electronic portal dose images (EPIs), in a fast forward-calculating iterative deconvolution. To test the accuracy of these EPI-based PDIs, a comparison is made to PDIs obtained by scanning diode measurements. The method proved accurate to within 0.2±0.7% (1 SD), for on-axis symmetrical and asymmetrical fields with different field widths and homogeneous phantom thicknesses, off-axis Alderson thorax fields and a strongly modulated IMRT field. Hence, the proposed method allows for fast, accurate portal dosimetry. In addition, it is demonstrated that the CEPID cross-talk signal is not only induced by optical photon reflection and scatter within the CEPID structure, but also by high-energy back-scattered radiation from CEPID elements (mirror and housing) towards the fluorescent screen
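
    As an illustration of the kind of forward-calculating iterative deconvolution described above, the sketch below applies a simple Van Cittert-style fixed-point iteration to recover a dose image from a blurred measurement. The Gaussian cross-talk kernel, image sizes, and relaxation factor are illustrative assumptions, not the authors' measured pencil-beam kernels or calibration.

```python
# Sketch: fixed-point (Van Cittert-style) deconvolution of a portal image.
# The measured EPI is modeled as the true portal dose image (PDI) convolved
# with a cross-talk kernel; the kernel here is a hypothetical stand-in for
# the pencil-beam-derived kernels described in the abstract.
import numpy as np
from scipy.signal import fftconvolve

def deconvolve_epi(epi, kernel, n_iter=50, relax=1.0):
    """Iteratively estimate the PDI such that PDI (*) kernel ~= EPI."""
    pdi = epi.copy()                      # start from the measured image
    for _ in range(n_iter):
        blurred = fftconvolve(pdi, kernel, mode="same")
        pdi += relax * (epi - blurred)    # forward-calculating correction step
    return pdi

# Toy usage with a normalized Gaussian cross-talk kernel (assumed shape).
x = np.arange(-15, 16)
g = np.exp(-(x / 5.0) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()
epi = np.random.default_rng(0).random((64, 64))
pdi_estimate = deconvolve_epi(epi, kernel)
```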

  13. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    Science.gov (United States)

    Ustinov, E A

    2014-10-01

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  14. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    Science.gov (United States)

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-01-01

    The increasing number of scientific publications on the Web and the absence of efficient tools for classifying and searching these documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to term ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) on the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and show that the Maximum Entropy based approach achieved a higher F-measure for both ambiguous and monosemous MeSH terms.
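
    As a rough illustration of the classification setting described above, the sketch below trains a maximum-entropy style annotator (multinomial logistic regression over TF-IDF features) on toy documents. The documents, labels, and scikit-learn pipeline are illustrative stand-ins, not the authors' MeSH training corpus or feature set.

```python
# Sketch: a maximum-entropy (multinomial logistic regression) annotator for
# assigning subject-heading labels to documents, in the spirit of the MeSH
# annotation task described above. The documents and labels below are toy
# placeholders, not the actual MeSH training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "myocardial infarction and troponin levels",
    "deep learning for protein structure prediction",
    "antibiotic resistance in gram-negative bacteria",
]
labels = ["Cardiology", "Bioinformatics", "Microbiology"]

# Logistic regression maximizes conditional entropy subject to feature
# constraints, which is why it is often called a maximum-entropy classifier.
annotator = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
annotator.fit(docs, labels)
print(annotator.predict(["elevated troponin after chest pain"]))
```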

  15. Accurate characterization of mask defects by combination of phase retrieval and deterministic approach

    Science.gov (United States)

    Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong

    2016-06-01

    In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system providing only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of a defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to use a phase retrieval algorithm first, to recover the general shape of the mask, and then a deterministic approach to characterize precisely the defects detected.

  16. An accurate, fast, mathematically robust, universal, non-iterative algorithm for computing multi-component diffusion velocities

    CERN Document Server

    Ambikasaran, Sivaram

    2015-01-01

    Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities, for low density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criteria, which scales as $\mathcal{O}(N^3)$, when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocal of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of matrix $V$ is computed in a fast manner at a computational complexity of $\...
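
    For context, the sketch below assembles and solves the dense Stefan-Maxwell system with the zero-mass-flux constraint, i.e. the $\mathcal{O}(N^3)$ baseline that the paper's low-rank $\mathcal{O}(N)$ algorithm accelerates. The mole fractions, mass fractions, binary diffusivities, and driving forces are randomly generated placeholders.

```python
# Sketch: the dense O(N^3) baseline for Stefan-Maxwell diffusion velocities
# that the abstract's O(N) algorithm accelerates. Mole fractions X, mass
# fractions Y, binary diffusivities D and driving forces grad_X below are
# illustrative placeholders, not data from the paper.
import numpy as np

def diffusion_velocities(X, Y, D, grad_X):
    """Solve the Stefan-Maxwell system with the zero-mass-flux constraint."""
    n = len(X)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = X[i] * X[j] / D[i, j]
                A[i, i] -= X[i] * X[j] / D[i, j]
    b = grad_X.copy()
    # The system is singular; replace the last equation with sum_i Y_i V_i = 0.
    A[-1, :] = Y
    b[-1] = 0.0
    return np.linalg.solve(A, b)      # dense solve: O(N^3) work

rng = np.random.default_rng(1)
n = 5
X = rng.random(n); X /= X.sum()
Y = rng.random(n); Y /= Y.sum()
D = rng.random((n, n)) + 0.5
D = 0.5 * (D + D.T)                   # symmetric binary diffusivities
grad_X = rng.standard_normal(n)
grad_X -= grad_X.mean()               # driving forces sum to zero
print(diffusion_velocities(X, Y, D, grad_X))
```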

  17. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Science.gov (United States)

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  18. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  19. Computational Approaches to Nucleic Acid Origami.

    Science.gov (United States)

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms. PMID:26348196

  20. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    Science.gov (United States)

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the bla CARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homology analysis showed that bla CARB-17 like genes were more highly conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this bla CARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of bla CARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by bla CARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  1. Fast and Accurate Computation Tools for Gravitational Waveforms from Binary Systems with any Orbital Eccentricity

    CERN Document Server

    Pierro, V; Spallicci, A D; Laserra, E; Recano, F

    2001-01-01

    The relevance of orbital eccentricity in the detection of gravitational radiation from (steady state) binary stars is emphasized. Computationally effective (fast and accurate) tools for constructing gravitational wave templates from binary stars with any orbital eccentricity are introduced, including tight estimation criteria for the pertinent truncation and approximation errors.

  2. Introducing Computational Approaches in Intermediate Mechanics

    Science.gov (United States)

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes); and generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
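
    A minimal example of the kind of ODE exercise mentioned above (the large-amplitude pendulum) might look as follows; the parameters and tolerances are illustrative, not taken from the course materials.

```python
# Sketch: integrating the large-amplitude pendulum numerically instead of
# using the small-angle approximation. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

g_over_L = 9.81 / 1.0                 # gravitational acceleration / length

def pendulum(t, y):
    theta, omega = y
    return [omega, -g_over_L * np.sin(theta)]   # full nonlinear restoring term

# Release from rest at a large amplitude (170 degrees).
sol = solve_ivp(pendulum, (0.0, 10.0), [np.radians(170.0), 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 10.0, 500)
theta = sol.sol(t)[0]
print("max |theta| over the run:", np.max(np.abs(theta)))
```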

  3. Computer Forensics Education - the Open Source Approach

    Science.gov (United States)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  4. Fast and Accurate Computation of Orbital Collision Probability for Short-Term Encounters

    OpenAIRE

    Serra, Romain; Arzelier, Denis; Joldes, Mioara; Lasserre, Jean-Bernard,; Rondepierre, Aude; Salvy, Bruno

    2016-01-01

    This article provides a new method for computing the probability of collision between two spherical space objects involved in a short-term encounter under Gaussian-distributed uncertainty. In this model of conjunction, classical assumptions reduce the probability of collision to the integral of a two-dimensional Gaussian probability density function over a disk. The computational method presented here is based on an analytic expression for the integral, derived by use ...
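
    For orientation, the quantity being computed can be approximated by brute-force quadrature of the two-dimensional Gaussian density over the hard-body disk, as in the sketch below; the miss vector, covariance, and combined radius are illustrative, and the paper's analytic series is not reproduced here.

```python
# Sketch: brute-force numerical evaluation of the short-term encounter
# collision probability as the integral of a 2-D Gaussian over a disk,
# the quantity for which the paper derives an analytic expression. The miss
# distance, covariance and combined radius below are illustrative.
import numpy as np
from scipy.integrate import dblquad

mu = np.array([120.0, 40.0])          # mean miss vector in the encounter plane (m)
cov = np.array([[2500.0, 300.0],
                [300.0, 900.0]])       # combined position covariance (m^2)
R = 15.0                               # combined hard-body radius (m)

cov_inv = np.linalg.inv(cov)
norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

def density(r, phi):
    # Gaussian density at a point of the disk, in polar coordinates about
    # the disk centre (the origin of the encounter plane); r is the Jacobian.
    x = np.array([r * np.cos(phi), r * np.sin(phi)]) - mu
    return norm * np.exp(-0.5 * x @ cov_inv @ x) * r

p_collision, _ = dblquad(density, 0.0, 2.0 * np.pi, 0.0, R)
print("collision probability ~", p_collision)
```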

  5. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  6. Interacting electrons theory and computational approaches

    CERN Document Server

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  7. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2016-01-01

    Full Text Available We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a European cash-or-nothing call option. To show the effect of the adaptive stepping strategy, we calculate option price and its Greeks with various tolerances. Several numerical results confirm that the proposed method is fast, accurate, and practical in computing option price and the Greeks.
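
    As a small illustration of Greek computation for the cash-or-nothing test problem, the sketch below estimates Delta and Gamma by central differences around the closed-form Black-Scholes price; the closed form stands in for the paper's nonuniform finite difference PDE solution, and the parameters are illustrative.

```python
# Sketch: central-difference estimates of Delta and Gamma for a European
# cash-or-nothing call, the test problem used above. The closed-form
# Black-Scholes price stands in for the PDE solution on the nonuniform grid.
import numpy as np
from scipy.stats import norm

def cash_or_nothing_call(S, K, r, sigma, T, Q=1.0):
    d2 = (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return Q * np.exp(-r * T) * norm.cdf(d2)

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 0.25
h = 0.5                                   # bump size in the asset price

price = cash_or_nothing_call(S0, K, r, sigma, T)
up = cash_or_nothing_call(S0 + h, K, r, sigma, T)
down = cash_or_nothing_call(S0 - h, K, r, sigma, T)

delta = (up - down) / (2.0 * h)           # first derivative w.r.t. S
gamma = (up - 2.0 * price + down) / h**2  # second derivative w.r.t. S
print(f"price={price:.6f}  delta={delta:.6f}  gamma={gamma:.6f}")
```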

  8. Is the Separable Propagator Perturbation Approach Accurate in Calculating Angle Resolved Photoelectron Diffraction Spectra?

    Science.gov (United States)

    Ng, C. N.; Chu, T. P.; Wu, Huasheng; Tong, S. Y.; Huang, Hong

    1997-03-01

    We compare multiple scattering results of angle-resolved photoelectron diffraction spectra between the exact slab method and the separable propagator perturbation method. In the slab method [C.H. Li, A.R. Lubinsky and S.Y. Tong, Phys. Rev. B17, 3128 (1978)], the source wave and multiple scattering within the strong scattering atomic layers are expanded in spherical waves while interlayer scattering is expressed in plane waves. The transformation between spherical waves and plane waves is done exactly. The plane waves are then matched across the solid-vacuum interface to a single outgoing plane wave in the detector's direction. The separable propagator perturbation approach uses two approximations: (i) a separable representation of the Green's function propagator and (ii) a perturbation expansion of multiple scattering terms. Results for c(2x2) S-Ni(001) show that this approximate method fails to converge, due to the very slow convergence of the separable representation for scattering angles less than 90°. However, this method is accurate in the backscattering regime and may be applied to XAFS calculations [J.J. Rehr and R.C. Albers, Phys. Rev. B41, 8139 (1990)]. The use of this method for angle-resolved photoelectron diffraction spectra is substantially less reliable.

  9. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  10. Sculpting the band gap: a computational approach.

    Science.gov (United States)

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  11. Accurate and Efficient Computations of the Greeks for Options Near Expiry Using the Black-Scholes Equations

    OpenAIRE

    Darae Jeong; Minhyun Yoo; Junseok Kim

    2016-01-01

    We investigate the accurate computations for the Greeks using the numerical solutions of the Black-Scholes partial differential equation. In particular, we study the behaviors of the Greeks close to the maturity time and in the neighborhood around the strike price. The Black-Scholes equation is discretized using a nonuniform finite difference method. We propose a new adaptive time-stepping algorithm based on local truncation error. As a test problem for our numerical method, we consider a Eur...

  12. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
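
    For reference, a sample-based channelized Hotelling observer can be computed directly from image ensembles, as sketched below; the difference-of-Gaussian channels, toy signal, and Gaussian noise images are illustrative stand-ins for the MAP-reconstructed PET images analysed in the paper.

```python
# Sketch: a channelized Hotelling observer (CHO) computed directly from
# sample images, the statistic whose mean and covariance the paper
# approximates analytically for MAP reconstructions. Channels and image
# ensembles below are toy stand-ins.
import numpy as np

def dog_channels(n, n_channels=4, sigma0=2.0, ratio=1.67):
    """Difference-of-Gaussian radial channels on an n x n grid."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for k in range(n_channels):
        s1, s2 = sigma0 * ratio**k, sigma0 * ratio**(k + 1)
        chans.append(np.exp(-r2 / (2 * s2**2)) - np.exp(-r2 / (2 * s1**2)))
    return np.stack([c.ravel() / np.linalg.norm(c) for c in chans])  # (C, n*n)

rng = np.random.default_rng(0)
n = 32
T = dog_channels(n)
signal = np.exp(-(np.mgrid[:n, :n] - n / 2) ** 2 / 8).prod(0).ravel()
g_absent = rng.normal(size=(200, n * n))          # signal-absent images
g_present = g_absent + signal                      # signal-present images

u_a, u_p = g_absent @ T.T, g_present @ T.T         # channel outputs
S = 0.5 * (np.cov(u_a.T) + np.cov(u_p.T))          # pooled channel covariance
w = np.linalg.solve(S, u_p.mean(0) - u_a.mean(0))  # Hotelling template
t_p, t_a = u_p @ w, u_a @ w                        # CHO test statistics
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print("CHO detectability (SNR):", snr)
```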

  13. Fast and accurate earthquake location within complex medium using a hybrid global-local inversion approach

    Institute of Scientific and Technical Information of China (English)

    Chaoying Bai; Rui Zhao; Stewart Greenhalgh

    2009-01-01

    A novel hybrid approach for earthquake location is proposed which uses a combined coarse global search and fine local inversion with a minimum search routine, plus an examination of the root mean squares (RMS) error distribution. The method exploits the advantages of network ray tracing and robust formulation of the Frechet derivatives to simultaneously update all possible initial source parameters around most local minima (including the global minimum) in the solution space, and finally to determine the likely global solution. Several synthetic examples involving a 3-D complex velocity model and a challenging source-receiver layout are used to demonstrate the capability of the newly-developed method. This new global-local hybrid solution technique not only incorporates the significant benefits of our recently published hypocenter determination procedure for multiple earthquake parameters, but also offers the attractive features of global optimal searching in the RMS travel time error distribution. Unlike traditional global search methods, for example the Monte Carlo approach, where millions of tests have to be done to find the final global solution, the new method only conducts a matrix-inversion-type local search, but does so multiple times simultaneously throughout the model volume to seek a global solution. The search is aided by inspection of the RMS error distribution. Benchmark tests against two popular approaches, the direct grid search method and the oct-tree importance sampling method, indicate that the hybrid global-local inversion yields comparable location accuracy and is not sensitive to modest levels of noise in the data, but more importantly it offers a two-order-of-magnitude speed-up in computational effort. Such an improvement, combined with high accuracy, makes it a promising hypocenter determination scheme for earthquake early warning, tsunami early warning, rapid hazard assessment and emergency response after a strong earthquake occurs.
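
    The coarse-global-search-plus-local-refinement idea can be illustrated in a toy setting, as below, with a homogeneous velocity model, straight-ray travel times, and a known origin time standing in for the paper's 3-D network ray tracing and Frechet-derivative inversion; the station geometry and noise level are arbitrary.

```python
# Sketch: coarse grid search over candidate hypocentres followed by local
# refinement of the best candidates, minimizing the RMS travel-time residual.
import numpy as np
from scipy.optimize import minimize

v = 5.0                                             # km/s, assumed constant
rng = np.random.default_rng(3)
stations = rng.uniform(0, 50, size=(8, 3)) * [1, 1, 0]   # surface stations
true_src = np.array([22.0, 31.0, 8.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / v
t_obs += rng.normal(scale=0.02, size=t_obs.size)    # modest noise

def rms(src):
    t_pred = np.linalg.norm(stations - src, axis=1) / v
    return np.sqrt(np.mean((t_pred - t_obs) ** 2))

# 1) Coarse global search on a grid of candidate hypocentres.
grid = np.stack(np.meshgrid(np.arange(0, 50, 5),
                            np.arange(0, 50, 5),
                            np.arange(0, 20, 5)), -1).reshape(-1, 3)
best_starts = grid[np.argsort([rms(p) for p in grid])[:5]]

# 2) Fine local inversion launched from the best few grid points.
solutions = [minimize(rms, p, method="Nelder-Mead") for p in best_starts]
best = min(solutions, key=lambda s: s.fun)
print("located hypocentre:", best.x, "RMS residual:", best.fun)
```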

  14. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    Science.gov (United States)

    Collier, G.

    1967-01-01

    Computer program, an Iterative Scheme Using a Direct Solution, obtains double precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX equals B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
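
    The underlying idea, mixed-precision iterative refinement, can be sketched as follows: factor the matrix in single precision, then iterate on double-precision residuals. This is a generic sketch of the technique, not the ISUDS subroutine itself.

```python
# Sketch: mixed-precision iterative refinement in the spirit of ISUDS --
# factor the coefficient matrix in single precision, then recover
# double-precision accuracy by iterating on the residual.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterative_refinement(A, b, n_iter=5):
    lu, piv = lu_factor(A.astype(np.float32))        # single-precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(n_iter):
        r = b - A @ x                                  # residual in double precision
        dx = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += dx                                        # correct the solution
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
b = rng.standard_normal(200)
x = iterative_refinement(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```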

  15. Accurate computation of Galerkin double surface integrals in the 3-D boundary element method

    CERN Document Server

    Adelman, Ross; Duraiswami, Ramani

    2015-01-01

    Many boundary element integral equation kernels are based on the Green's functions of the Laplace and Helmholtz equations in three dimensions. These include, for example, the Laplace, Helmholtz, elasticity, Stokes, and Maxwell's equations. Integral equation formulations lead to more compact, but dense linear systems. These dense systems are often solved iteratively via Krylov subspace methods, which may be accelerated via the fast multipole method. There are advantages to Galerkin formulations for such integral equations, as they treat problems associated with kernel singularity, and lead to symmetric and better conditioned matrices. However, the Galerkin method requires each entry in the system matrix to be created via the computation of a double surface integral over one or more pairs of triangles. There are a number of semi-analytical methods to treat these integrals, which all have some issues, and are discussed in this paper. We present novel methods to compute all the integrals that arise in Galerkin fo...

  16. Accurate computation of surface stresses and forces with immersed boundary methods

    CERN Document Server

    Goza, Andres; Morley, Benjamin; Colonius, Tim

    2016-01-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is a...

  17. Necessary conditions for accurate computations of three-body partial decay widths

    CERN Document Server

    Garrido, E; Fedorov, D V

    2008-01-01

    The partial width for decay of a resonance into three fragments is largely determined at distances where the energy is smaller than the effective potential producing the corresponding wave function. At short distances the many-body properties are accounted for by preformation or spectroscopic factors. We use the adiabatic expansion method combined with the WKB approximation to obtain the indispensable cluster model wave functions at intermediate and larger distances. We test the concept by deriving conditions for the minimal basis expressed in terms of partial waves and radial nodes. We compare results for different effective interactions and methods. Agreement is found with experimental values for a sufficiently large basis. We illustrate the ideas with realistic examples from $\alpha$-emission of $^{12}$C and two-proton emission of $^{17}$Ne. Basis requirements for accurate momentum distributions are briefly discussed.

  18. Accurate and efficient computation of the Green's tensor for stratified media

    OpenAIRE

    Gay-Balmaz, P.; Martin, O. J. F.; Paulus, M.

    2000-01-01

    We present a technique for the computation of the Green's tensor in three-dimensional stratified media composed of an arbitrary number of layers with different permittivities and permeabilities (including metals with a complex permittivity). The practical implementation of this technique is discussed in detail. In particular, we show how to efficiently handle the singularities occurring in Sommerfeld integrals, by deforming the integration path in the complex plane. Examples assess the accura...

  19. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7 × 10⁻⁴ Ha/Bohr.

  20. An adaptive grid method for computing time accurate solutions on structured grids

    Science.gov (United States)

    Bockelie, Michael J.; Smith, Robert E.; Eiseman, Peter R.

    1991-01-01

    The solution method consists of three parts: a grid movement scheme; an unsteady Euler equation solver; and a temporal coupling routine that links the dynamic grid to the Euler solver. The grid movement scheme is an algebraic method containing grid controls that generate a smooth grid that resolves the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling is performed with a grid prediction-correction procedure that is simple to implement and provides a grid that does not lag the solution in time. The adaptive solution method is tested by computing the unsteady inviscid solutions for a one-dimensional shock tube and a two-dimensional shock-vortex interaction.

  1. Recursive algorithm and accurate computation of dyadic Green's functions for stratified uniaxial anisotropic media

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A recursive algorithm is adopted for the computation of dyadic Green's functions in three-dimensional stratified uniaxial anisotropic media with an arbitrary number of layers. Three linear equation groups for computing the coefficients of the Sommerfeld integrals are obtained according to the continuity condition of electric and magnetic fields across the interface between different layers, which correspond to the TM wave produced by a vertical unit electric dipole and the TE or TM wave produced by a horizontal unit electric dipole, respectively. All the linear equation groups can be solved via the recursive algorithm. The dyadic Green's functions with source point and field point in any layer can be conveniently obtained by merely changing the position of the elements within the source term of the linear equation groups. The problem of singularities occurring in the Sommerfeld integrals is efficiently solved by deforming the integration path in the complex plane. The expression of the dyadic Green's functions provided by this paper is terse in form, easy to program, and does not overflow. Theoretical analysis and numerical examples show the accuracy and effectiveness of the algorithm.

  2. Novel computational approaches characterizing knee physiotherapy

    Directory of Open Access Journals (Sweden)

    Wangdo Kim

    2014-01-01

    Full Text Available A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.

  3. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    Science.gov (United States)

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS which generate associated currents when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles that are measured by the high quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO are presented. The full measured rotation profiles are included in the simulation. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored and simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  4. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Cirera, J

    2009-01-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron and two with cobalt) for which experimental enthalpies of spin crossover are available. It is shown that such enthalpies can be used as quantitative benchmarks of a functional's ability to balance electron correlation in both of the involved states. TPSSh achieves an unprecedented mean absolute error of ~11 kJ/mol in spin transition energies, with the local functional M06L a distant second (25 kJ/mol). Other tested functionals give mean absolute errors of 40 kJ/mol or more. This work confirms earlier suggestions that 10% exact exchange is near-optimal for describing the electron correlation...

  5. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    Science.gov (United States)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  6. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  7. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  8. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  9. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    Science.gov (United States)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Blueprinting Approach in Support of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2012-03-01

    Full Text Available Current cloud service offerings, i.e., Software-as-a-service (SaaS), Platform-as-a-service (PaaS) and Infrastructure-as-a-service (IaaS) offerings, are often provided as monolithic, one-size-fits-all solutions and give little or no room for customization. This limits the ability of Service-based Application (SBA) developers to configure and syndicate offerings from multiple SaaS, PaaS, and IaaS providers to address their application requirements. Furthermore, combining different independent cloud services necessitates a uniform description format that facilitates the design, customization, and composition. Cloud Blueprinting is a novel approach that allows SBA developers to easily design, configure and deploy virtual SBA payloads on virtual machines and resource pools on the cloud. We propose the Blueprint concept as a uniform abstract description for cloud service offerings that may cross different cloud computing layers, i.e., SaaS, PaaS and IaaS. To support developers with the SBA design and development in the cloud, this paper introduces a formal Blueprint Template for unambiguously describing a blueprint, as well as a Blueprint Lifecycle that guides developers through the manipulation, composition and deployment of different blueprints for an SBA. Finally, the empirical evaluation of the blueprinting approach within an EC’s FP7 project is reported and an associated blueprint prototype implementation is presented.

  11. Trajectory Evaluation of Rotor-Flying Robots Using Accurate Inverse Computation Based on Algorithm Differentiation

    Directory of Open Access Journals (Sweden)

    Yuqing He

    2014-01-01

    Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of its model and significant uncertainties regarding many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible. It is necessary to conduct an analysis of the flight maneuvering ability of an RFR prior to test flight. Our aim in this paper is to use a numerical method called algorithm differentiation (AD) to solve this problem. The basic idea is to compute the internal state (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.
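
    As a hedged illustration of the algorithm-differentiation machinery mentioned above (the RFR model and the flatness-based optimization themselves are not reproduced here), the sketch below implements minimal forward-mode AD with dual numbers in Python; the function f and the evaluation point are arbitrary examples, not quantities from the paper.

        # Minimal forward-mode algorithmic differentiation with dual numbers,
        # illustrating how exact derivatives can be propagated through a
        # computation without symbolic algebra or finite differences.
        import math

        class Dual:
            """Number carrying a value and its derivative w.r.t. one input."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def dsin(x):  # sine rule for dual numbers
            return Dual(math.sin(x.val), math.cos(x.val) * x.der)

        # Derivative of f(x) = x*sin(x) + 3x at x = 1.2, exact to machine precision.
        x = Dual(1.2, 1.0)              # seed: derivative of the input is 1
        f = x * dsin(x) + 3 * x
        print(f.val, f.der)             # f(1.2) and f'(1.2)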

  12. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    Energy Technology Data Exchange (ETDEWEB)

    Austin, T.M.; /Tech-X, Boulder; Cary, J.R.; /Tech-X, Boulder /Colorado U.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations, and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas including obtaining MHD spectra from time-domain simulations.
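
    The Werner-Cary post-processing step is not reproduced here, but the sketch below shows the generic ingredient it names: an SVD-based Tikhonov-regularized least-squares fit of mode amplitudes to a time-domain signal. The two nearly degenerate mode frequencies, the sampling window and the regularization parameter are illustrative placeholders, not values from the study.

        # Generic SVD + Tikhonov regularization sketch (not the Werner-Cary
        # implementation): fit amplitudes of candidate damped sinusoids to a
        # noisy time signal whose design matrix is ill-conditioned because the
        # two modes are nearly degenerate.
        import numpy as np

        def tikhonov_lstsq(A, b, lam):
            """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the SVD of A."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            f = s / (s**2 + lam**2)      # filter factors damp small singular values
            return Vt.T @ (f * (U.T @ b))

        t = np.linspace(0.0, 1e-8, 400)
        modes = [(3.9e9, 5e-9), (3.905e9, 5e-9)]        # (frequency Hz, decay time s)
        A = np.column_stack([np.exp(-t/tau) * np.cos(2*np.pi*f0*t) for f0, tau in modes])
        b = A @ np.array([1.0, 0.3]) + 1e-3 * np.random.randn(t.size)
        print(tikhonov_lstsq(A, b, lam=1e-2))           # regularized amplitude estimate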

  13. Efficient and accurate computation of electric field dyadic Green's function in layered media

    CERN Document Server

    Cho, Min Hyung

    2016-01-01

    Concise and explicit formulas for dyadic Green's functions, representing the electric and magnetic fields due to a dipole source placed in layered media, are derived in this paper. First, the electric and magnetic fields in the spectral domain for the half space are expressed using Fresnel reflection and transmission coefficients. Each component of the electric field in the spectral domain constitutes the spectral Green's function in layered media. The Green's function in the spatial domain is then recovered by evaluating Sommerfeld integrals for each component in the spectral domain. By using Bessel identities, the number of Sommerfeld integrals is reduced, resulting in much simpler and more efficient formulas for numerical implementation compared with previous results. This approach is extended to the three-layer Green's function. In addition, the singular part of the Green's function is naturally separated out so that integral equation methods developed for free space Green's functions can be used with minimal mo...

  14. Sampling strategies for accurate computational inferences of gametic phase across highly polymorphic major histocompatibility complex loci

    Directory of Open Access Journals (Sweden)

    Rodríguez Airam

    2011-05-01

    Full Text Available Abstract Background Genes of the Major Histocompatibility Complex (MHC) are very popular genetic markers among evolutionary biologists because of their potential role in pathogen confrontation and sexual selection. However, MHC genotyping still remains challenging and time-consuming in spite of substantial methodological advances. Although computational haplotype inference has brought into focus interesting alternatives, high heterozygosity, extensive genetic variation and population admixture are known to cause inaccuracies. We have investigated the role of sample size, genetic polymorphism and genetic structuring on the performance of the popular Bayesian PHASE algorithm. To cover this aim, we took advantage of a large database of known genotypes (using traditional laboratory-based techniques at single MHC class I (N = 56 individuals and 50 alleles) and MHC class II B (N = 103 individuals and 62 alleles) loci in the lesser kestrel Falco naumanni. Findings Analyses carried out over real MHC genotypes showed that the accuracy of gametic phase reconstruction improved with sample size as a result of the reduction in the allele to individual ratio. We then simulated different data sets introducing variations in this parameter to define an optimal ratio. Conclusions Our results demonstrate a critical influence of the allele to individual ratio on PHASE performance. We found that a minimum allele to individual ratio (1:2) yielded 100% accuracy for both MHC loci. Sampling effort is therefore a crucial step to obtain reliable MHC haplotype reconstructions and must be planned according to the degree of MHC polymorphism. We expect our findings to provide a foothold for the design of straightforward and cost-effective genotyping strategies of those MHC loci from which locus-specific primers are available.
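
    A small worked example of the sampling criterion reported above: the allele-to-individual ratio, checked against the 1:2 guideline the authors identify. The locus sizes are the ones quoted in the abstract; the threshold check itself is just arithmetic, not part of the PHASE algorithm.

        # Compute the allele-to-individual ratio for the two loci in the abstract
        # and compare it with the 1:2 guideline (0.5 as a fraction).
        def allele_to_individual_ratio(n_alleles, n_individuals):
            return n_alleles / n_individuals

        for locus, n_alleles, n_ind in [("MHC class I", 50, 56), ("MHC class II B", 62, 103)]:
            r = allele_to_individual_ratio(n_alleles, n_ind)
            status = "meets" if r <= 0.5 else "exceeds"
            print(f"{locus}: ratio = {r:.2f} ({status} the 1:2 guideline)")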

  15. How accurate is image-free computer navigation for hip resurfacing arthroplasty? An anatomical investigation

    International Nuclear Information System (INIS)

    The existing studies concerning image-free navigated implantation of hip resurfacing arthroplasty are based on analysis of the accuracy of conventional biplane radiography. Studies have shown that these measurements in biplane radiography are imprecise and that precision is improved by use of three-dimensional (3D) computed tomography (CT) scans. To date, the accuracy of image-free navigation devices for hip resurfacing has not been investigated using CT scans, and anteversion accuracy has not been assessed at all. Furthermore, no study has tested the reliability of the navigation software concerning the automatically calculated implant position. The purpose of our study was to analyze the accuracy of varus-valgus and anteversion using an image-free hip resurfacing navigation device. The reliability of the software-calculated implant position was also determined. A total of 32 femoral hip resurfacing components were implanted on embalmed human femurs using an image-free navigation device. In all, 16 prostheses were implanted in the position proposed by the navigation software; the other 16 were inserted in an optimized valgus position. A 3D CT scan was undertaken before and after the operation. The difference between the measured and planned varus-valgus angle averaged 1 deg (mean±standard deviation (SD): group I, 1 deg±2 deg; group II, 1 deg±1 deg). The mean±SD difference between femoral neck anteversion and anteversion of the implant was 4 deg (group I, 4 deg±4 deg; group II, 4 deg±3 deg). The software-calculated implant position differed 7 deg±8 deg from the measured neck-shaft angle. These measured accuracies did not differ significantly between the two groups. Our study proved the high accuracy of the navigation device concerning the most important biomechanical factor: the varus-valgus angle. The software calculation of the proposed implant position has been shown to be inaccurate and needs improvement. Hence, manual adjustment of the

  16. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Energy Technology Data Exchange (ETDEWEB)

    Ahnen, Sandra; Hehn, Anna-Sophia [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Vogiatzis, Konstantinos D. [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany); Trachsel, Maria A.; Leutwyler, Samuel [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Klopper, Wim, E-mail: klopper@kit.edu [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany)

    2014-09-30

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
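
    The "remaining basis-set-superposition errors" mentioned above are commonly handled with the Boys-Bernardi counterpoise scheme; as a reminder of the standard formula (the paper's exact correction protocol is not reproduced here), the counterpoise-corrected interaction energy of a complex $AB$ is

        $E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(\mathcal{B}_{AB}) - E_{A}(\mathcal{B}_{AB}) - E_{B}(\mathcal{B}_{AB})$,

    where all three energies are evaluated in the full dimer basis $\mathcal{B}_{AB}$ at the geometry of the complex, so that each fragment "borrows" the same basis functions it has access to in the dimer calculation.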

  17. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    Science.gov (United States)

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged van der Waals interaction in classical molecular dynamics simulations, it has been a long-lasting issue to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we briefly review the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey is summarized with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304) and the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
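
    As a hedged illustration of the first class listed above (truncation-type methods), the sketch below evaluates a shifted Coulomb pair energy with a spherical cutoff under the minimum-image convention. It is a pedagogical toy, not production code and not any specific scheme from the survey; the coordinates, charges, box size and cutoff are arbitrary, and Ewald-type methods remain necessary whenever the neglected long-range contribution matters.

        # Truncation-type electrostatics sketch: shifted Coulomb pair sum with a
        # spherical cutoff and minimum-image periodic boundary conditions.
        import numpy as np

        def shifted_coulomb_energy(positions, charges, box, r_cut, ke=138.935458):
            """Pairwise shifted Coulomb energy.  ke is Coulomb's constant in
            kJ mol^-1 nm e^-2 (GROMACS-style units); positions/box in nm."""
            n = len(charges)
            energy = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    d = positions[i] - positions[j]
                    d -= box * np.round(d / box)        # minimum image
                    r = np.linalg.norm(d)
                    if r < r_cut:
                        # Shift so the pair term vanishes continuously at r_cut.
                        energy += ke * charges[i] * charges[j] * (1.0/r - 1.0/r_cut)
            return energy

        pos = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [1.2, 1.2, 1.2]])
        q = np.array([+1.0, -1.0, +1.0])
        print(shifted_coulomb_energy(pos, q, box=np.array([2.5, 2.5, 2.5]), r_cut=1.0))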

  18. Accurate guidance for percutaneous access to a specific target in soft tissues: preclinical study of computer-assisted pericardiocentesis.

    Science.gov (United States)

    Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D

    1999-06-01

    In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. Required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart, by determining a stable region, whatever the period of the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was realized on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application.

  19. Methods for Computing Accurate Atomic Spin Moments for Collinear and Noncollinear Magnetism in Periodic and Nonperiodic Materials.

    Science.gov (United States)

    Manz, Thomas A; Sholl, David S

    2011-12-13

    The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.
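
    To make the notion of partitioning a spin density concrete, the sketch below assigns each point of a gridded spin density to its nearest atom and integrates within each region. This is deliberately the crudest possible (Voronoi-like) partitioning, not the Bader or DDEC schemes discussed above, and the two-Gaussian "spin density" is synthetic.

        # Crude atomic-spin-moment sketch: nearest-atom (Voronoi) partitioning of
        # a gridded spin density rho_up - rho_down.  Illustration only; not the
        # Bader or DDEC methods described in the record above.
        import numpy as np

        def voronoi_spin_moments(spin_density, grid_points, atom_positions, voxel_volume):
            """spin_density: (N,) values on N grid points; grid_points: (N,3);
            atom_positions: (M,3).  Returns one moment per atom."""
            d = np.linalg.norm(grid_points[:, None, :] - atom_positions[None, :, :], axis=2)
            owner = np.argmin(d, axis=1)                 # nearest atom for each point
            moments = np.zeros(len(atom_positions))
            np.add.at(moments, owner, spin_density * voxel_volume)
            return moments

        # Synthetic example: two "atoms" carrying Gaussian spin clouds of opposite sign.
        grid_1d = np.linspace(-4, 4, 40)
        X, Y, Z = np.meshgrid(grid_1d, grid_1d, grid_1d, indexing="ij")
        pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
        atoms = np.array([[-1.5, 0.0, 0.0], [1.5, 0.0, 0.0]])
        rho_s = (np.exp(-np.sum((pts - atoms[0])**2, axis=1))
                 - np.exp(-np.sum((pts - atoms[1])**2, axis=1)))
        dv = (grid_1d[1] - grid_1d[0])**3
        print(voronoi_spin_moments(rho_s, pts, atoms, dv))   # roughly +/- pi**1.5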

  20. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    Science.gov (United States)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  1. Human Computer Interaction: An intellectual approach

    OpenAIRE

    Mr. Kuntal Saroha; Sheela Sharma; Gurpreet Bhatia

    2011-01-01

    This paper discusses the research that has been done in thefield of Human Computer Interaction (HCI) relating tohuman psychology. Human-computer interaction (HCI) isthe study of how people design, implement, and useinteractive computer systems and how computers affectindividuals, organizations, and society. This encompassesnot only ease of use but also new interaction techniques forsupporting user tasks, providing better access toinformation, and creating more powerful forms ofcommunication. ...

  2. An accurate scheme to solve cluster dynamics equations using a Fokker-Planck approach

    CERN Document Server

    Jourdan, Thomas; Legoll, Frédéric; Monasse, Laurent

    2016-01-01

    We present a numerical method to accurately simulate particle size distributions within the formalism of rate equation cluster dynamics. This method is based on a discretization of the associated Fokker-Planck equation. We show that particular care has to be taken to discretize the advection part of the Fokker-Planck equation, in order to avoid distortions of the distribution due to numerical diffusion. For this purpose we use the Kurganov-Noelle-Petrova scheme coupled with the monotonicity-preserving reconstruction MP5, which leads to very accurate results. The interest of the method is illustrated by the case of loop coarsening in aluminum. We show that the choice of the models to describe the energetics of loops does not significantly change the normalized loop distribution, while the choice of the models for the absorption coefficients seems to have a significant impact on it.
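
    The numerical-diffusion problem mentioned above is easy to reproduce with the simplest discretization. The sketch below takes first-order upwind steps for the advection part of a Fokker-Planck-type equation and shows the peak of a narrow distribution being smeared; it is a deliberately naive baseline, not the Kurganov-Noelle-Petrova/MP5 scheme of the paper, and the grid, drift velocity and initial distribution are arbitrary.

        # First-order upwind step for dc/dt + d(v c)/dx = 0, the advection part of
        # a Fokker-Planck equation in cluster-size space.  This naive scheme smears
        # the distribution (numerical diffusion); higher-order fluxes avoid that.
        import numpy as np

        def upwind_advection_step(c, v, dx, dt):
            """c: cell-averaged distribution; v: drift velocity at cell interfaces
            (length len(c)+1, assumed positive here for simplicity)."""
            flux = v * np.concatenate(([0.0], c))    # upwind: take the left cell value
            return c - dt / dx * (flux[1:] - flux[:-1])

        x = np.linspace(0.0, 100.0, 201)
        dx = x[1] - x[0]
        c = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)   # narrow initial distribution
        v = np.ones(x.size + 1)                      # constant drift to larger sizes
        for _ in range(200):
            c = upwind_advection_step(c, v, dx, dt=0.4 * dx)   # CFL number 0.4
        print(c.max())   # peak drops well below 1.0: numerical diffusion at work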

  3. Physics and computer science: quantum computation and other approaches

    OpenAIRE

    Salvador E. Venegas-Andraca

    2011-01-01

    This is a position paper written as an introduction to the special volume on quantum algorithms I edited for the journal Mathematical Structures in Computer Science (Volume 20 - Special Issue 06 (Quantum Algorithms), 2010).

  4. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w

  5. A computational approach to chemical etiologies of diabetes

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe

    2013-01-01

    Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic...

  6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  7. Highly accurate and efficient self-force computations using time-domain methods: Error estimates, validation, and optimization

    CERN Document Server

    Thornburg, Jonathan

    2010-01-01

    If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^{\infty} F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...

  8. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression

    Directory of Open Access Journals (Sweden)

    Ettore Taverna

    2013-01-01

    Full Text Available The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel to each other.

  9. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression.

    Science.gov (United States)

    Taverna, Ettore; Ufenast, Henri; Broffoni, Laura; Garavaglia, Guido

    2013-07-01

    The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel to each other.

  10. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    Science.gov (United States)

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  11. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    Science.gov (United States)

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  12. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Science.gov (United States)

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  13. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    International Nuclear Information System (INIS)

    Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with some molecules before use. However, the information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted the better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike the traditional physical methods, this method, which is based on the catalytic property of AuNPs, can give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been validated by adopting three independent molecules to functionalize AuNPs, including both covalent and non-covalent functionalization. Some interesting results are thereby obtained, several of which are revealed for the first time. The method may also be further developed as a useful tool for the characterization of a solid surface

  14. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Directory of Open Access Journals (Sweden)

    Jaco Prinsloo

    2016-06-01

    Full Text Available Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  15. Practically acquired and modified cone-beam computed tomography images for accurate dose calculation in head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Chih-Chung [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Huang, Wen-Tao [Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Tsai, Chiao-Ling; Chao, Hsiao-Ling; Huang, Guo-Ming; Wang, Chun-Wei [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Wu, Jian-Kuen [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Wu, Chien-Jang [National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Cheng, Jason Chia-Hsien [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Clinical Medicine; National Taiwan Univ. Taipei (China). Graduate Inst. of Biomedical Electronics and Bioinformatics

    2011-10-15

    On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during the radiotherapy course. This study aims to establish a practical method to modify the CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT for image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of CT number distribution was improved and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of the prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences were from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculations for the potential use in adaptive radiotherapy.
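
    A minimal sketch of the kind of CT-number remapping described above: a piecewise-linear lookup from CBCT values to FBCT-equivalent values built from phantom inserts. The calibration pairs below are hypothetical placeholders, not the values measured in the study, and a clinical implementation would use the site-specific calibration and scan protocol.

        # Remap CBCT CT numbers to FBCT-equivalent values with a piecewise-linear
        # lookup built from phantom inserts (calibration pairs are placeholders).
        import numpy as np

        # (CBCT value, FBCT value) for air-, water- and bone-like inserts -- hypothetical.
        calibration = np.array([[-950.0, -1000.0],
                                [  -20.0,     0.0],
                                [  900.0,  1000.0]])

        def modify_cbct(cbct_image):
            """Map every CBCT voxel to its FBCT-equivalent CT number by interpolation."""
            return np.interp(cbct_image, calibration[:, 0], calibration[:, 1])

        cbct_slice = np.array([[-950.0, -500.0], [-20.0, 450.0]])
        print(modify_cbct(cbct_slice))   # voxel values now on the FBCT calibration scale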

  16. Computer science approach to quantum control

    OpenAIRE

    Janzing, Dominik

    2006-01-01

    This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. It shows that measuring certain types of quantum observables is such a complex task that every instrument that is able to perform it would necessarily be an extremely powerful computer.

  17. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in the studies looking into relationships between genotype and phenotype. This need is due to the advent of high throughput methods for analysis of genomes. Nowadays, any genetic experiment involves data on thousands and tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, or the ruler) are hardly effective on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  18. Accurate Waveforms for Non-spinning Binary Black Holes using the Effective-one-body Approach

    Science.gov (United States)

    Buonanno, Alessandra; Pan, Yi; Baker, John G.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.

    2007-01-01

    Using numerical relativity as guidance and the natural flexibility of the effective-one-body (EOB) model, we extend the latter so that it can successfully match the numerical relativity waveforms of non-spinning binary black holes during the last stages of inspiral, merger and ringdown. Here, by successfully, we mean with phase differences black-hole masses. The final black-hole mass and spin predicted by the numerical simulations are used to determine the ringdown frequency and decay time of three quasi-normal-mode damped sinusoids that are attached to the EOB inspiral-(plunge) waveform at the light-ring. The accurate EOB waveforms may be employed for coherent searches of gravitational waves emitted by non-spinning coalescing binary black holes with ground-based laser-interferometer detectors.
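
    For orientation, the ringdown portion attached at the light ring is a superposition of the three quasi-normal-mode damped sinusoids mentioned above; in the usual notation (the paper's exact matching conditions are not reproduced here) it can be written as

        $h_{\mathrm{ringdown}}(t) = \sum_{n=1}^{3} A_n \, e^{-(t - t_{\mathrm{m}})/\tau_n} \cos\!\left[\omega_n (t - t_{\mathrm{m}}) + \varphi_n\right]$,

    where the frequencies $\omega_n$ and decay times $\tau_n$ are fixed by the final black-hole mass and spin taken from the numerical simulations, $t_{\mathrm{m}}$ is the matching time, and the amplitudes $A_n$ and phases $\varphi_n$ are set by continuity with the EOB inspiral-plunge waveform.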

  19. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    Science.gov (United States)

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  20. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  1. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    Science.gov (United States)

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

    With the development of the vehicle industry, controlling stability has become more and more important. Techniques for evaluating vehicle stability are in high demand. As a common approach, GPS and INS sensors are typically used to measure vehicle stability parameters by fusing data from the two sensor systems. A Kalman filter is usually used to fuse the multi-sensor data, although it requires the prior model parameters to be identified. In this paper, a robust, intelligent, and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment are carried out to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
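
    As a generic illustration of the fusion step, the sketch below runs one predict/update cycle of a plain linear Kalman filter. The paper's two-stage filter, fuzzy interpolation and four-wheel dynamic model are not reproduced; the state, matrices and measurements here are toy placeholders.

        # Minimal linear Kalman filter predict/update, as a generic GPS/INS fusion
        # illustration (all matrices below are placeholders, not the paper's model).
        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict + update step.  x: state, P: covariance, z: measurement."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update
            S = H @ P_pred @ H.T + R                    # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy example: state = [yaw rate, sideslip angle]; INS propagates, GPS corrects.
        F = np.eye(2); H = np.eye(2)
        Q = 1e-3 * np.eye(2); R = 1e-2 * np.eye(2)
        x, P = np.zeros(2), np.eye(2)
        for z in [np.array([0.10, 0.02]), np.array([0.12, 0.03])]:   # noisy measurements
            x, P = kalman_step(x, P, z, F, H, Q, R)
        print(x)   # fused estimate of yaw rate and sideslip angle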

  2. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    Directory of Open Access Journals (Sweden)

    Zhibin Miao

    2015-12-01

    Full Text Available With the development of the vehicle industry, controlling stability has become more and more important. Techniques for evaluating vehicle stability are in high demand. As a common approach, GPS and INS sensors are typically used to measure vehicle stability parameters by fusing data from the two sensor systems. A Kalman filter is usually used to fuse the multi-sensor data, although it requires the prior model parameters to be identified. In this paper, a robust, intelligent, and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment are carried out to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.

  3. Assessing creativity in computer music ensembles: a computational approach

    OpenAIRE

    Comajuncosas, Josep M.

    2016-01-01

    Over the last decade Laptop Orchestras and Mobile Ensembles have proliferated. As a result, a large body of research has arisen on infrastructure, evaluation, design principles and compositional methodologies for Computer Music Ensembles (CME). However, little has been addressed and very little is known about the challenges and opportunities provided by CMEs for creativity in musical performance. Therefore, one of the most common issues CMEs have to deal with is the lack of ...

  4. Human Computer Interaction: An intellectual approach

    Directory of Open Access Journals (Sweden)

    Kuntal Saroha

    2011-08-01

    Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  5. An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV

    Science.gov (United States)

    Lynch, K. P.; Scarano, F.

    2015-03-01

    The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. The increased value of the velocity space
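
    The record above builds on the basic MART iteration; as a hedged reminder of what that iteration does, the sketch below applies the standard multiplicative update to a tiny toy system. The weighting matrix, pixel values and relaxation factor are placeholders, the full tomographic camera model is omitted, and SMTE's time-marching initial guess is only indicated in a comment.

        # Minimal MART (multiplicative algebraic reconstruction technique) update,
        # the basic iteration that MTE-MART and the sequential SMTE variant build on.
        import numpy as np

        def mart_iteration(E, W, I, mu=1.0):
            """E: voxel intensities, W: (n_pixels, n_voxels) weights, I: pixel values."""
            for i in range(len(I)):
                proj = W[i] @ E
                if proj > 0:
                    # Multiplicative correction, damped by the relaxation factor mu.
                    E *= (I[i] / proj) ** (mu * W[i])
            return E

        W = np.array([[1.0, 1.0, 0.0],      # each row: one pixel's line of sight
                      [0.0, 1.0, 1.0]])
        I = np.array([2.0, 1.5])            # recorded pixel intensities
        E = np.ones(3)                      # uniform initial guess (SMTE would instead
                                            # reuse the previous time step's object)
        for _ in range(20):
            E = mart_iteration(E, W, I)
        print(E, W @ E)                     # reprojection approaches the recordings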

  6. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    Science.gov (United States)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation-invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, including asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ~20 compared with the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, those from previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is predicted.

  7. Computer science approach to quantum control

    Energy Technology Data Exchange (ETDEWEB)

    Janzing, D.

    2006-07-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is particularly true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument that is able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics therefore has two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  8. Uncertainty in biology: a computational modeling approach

    OpenAIRE

    2015-01-01

    Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine, and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex with a myriad of influencing factors, which strongly complicates the model building...

  9. Accurate Simulation of Resonance-Raman Spectra of Flexible Molecules: An Internal Coordinates Approach.

    Science.gov (United States)

    Baiardi, Alberto; Bloino, Julien; Barone, Vincenzo

    2015-07-14

    The interpretation and analysis of experimental resonance-Raman (RR) spectra can be significantly facilitated by vibronic computations based on reliable quantum-mechanical (QM) methods. With the aim of improving the description of large and flexible molecules, our recent time-dependent formulation to compute vibrationally resolved electronic spectra, based on Cartesian coordinates, has been extended to support internal coordinates. A set of nonredundant delocalized coordinates is automatically generated from the molecular connectivity thanks to a new general and robust procedure. In order to validate our implementation, a series of molecules has been used as test cases. Among them, rigid systems show that normal modes based on Cartesian and delocalized internal coordinates provide equivalent results, but the latter set is much more convenient and reliable for systems characterized by strong geometric deformations associated with the electronic transition. The so-called Z-matrix internal coordinates, which perform well for chain molecules, are also shown to be poorly suited in the presence of cycles or nonstandard structures.

  10. A simple and surprisingly accurate approach to the chemical bond obtained from dimensional scaling

    CERN Document Server

    Svidzinsky, A A; Scully, M O

    2005-01-01

    We present a new dimensional scaling transformation of the Schrodinger equation for the two electron bond. This yields, for the first time, a good description of the two electron bond via D-scaling. There also emerges, in the large-D limit, an intuitively appealing semiclassical picture, akin to a molecular model proposed by Niels Bohr in 1913. In this limit, the electrons are confined to specific orbits in the scaled space, yet the uncertainty principle is maintained because the scaling leaves invariant the position-momentum commutator. A first-order perturbation correction, proportional to 1/D, substantially improves the agreement with the exact ground state potential energy curve. The present treatment is very simple mathematically, yet provides a strikingly accurate description of the potential energy curves for the lowest singlet, triplet and excited states of H_2. We find the modified D-scaling method also gives good results for other molecules. It can be combined advantageously with Hartree-Fock and ot...

  11. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    OpenAIRE

    Abolfazli, Saeid; Sanaei, Zohreh; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, name...

  12. Effective approach for accurately calculating individual energy of polar heterojunction interfaces

    Science.gov (United States)

    Akiyama, Toru; Nakane, Harunobu; Nakamura, Kohji; Ito, Tomonori

    2016-09-01

    We propose a direct approach for calculating the individual energy of polar semiconductor interfaces using density functional theory calculations. This approach is applied to polar interfaces between group-III nitrides (AlN and GaN) and SiC and clarifies the interplay of chemical bonding and charge neutrality at the interface, which is crucial for the stability and polarity of group-III nitrides on SiC substrates. The ideal interface is stabilized among various atomic arrangements over a wide range of the chemical potential on Si-face SiC, whereas those with intermixing are favorable on C-face SiC. The stabilization of the ideal interfaces resulting in Ga-polar GaN and Al-polar AlN films on Si-face SiC is consistent with experiments, suggesting that our approach is versatile enough to evaluate various polar heterojunction interfaces as well as group-III nitrides on semiconductor substrates.

  13. Computational dynamics for robotics systems using a non-strict computational approach

    Science.gov (United States)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  14. Human brain mapping: Experimental and computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  15. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  16. Development of a computer assisted gantry system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy

    Directory of Open Access Journals (Sweden)

    A. D. Zarrabi

    2010-12-01

    Full Text Available PURPOSE: To design a simple, cost-effective system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy (PCNL). MATERIALS AND METHODS: The design consists of a low-cost, light-weight, portable mechanical gantry with a needle guiding device. Using C-arm fluoroscopy, two images of the contrast-filled renal collecting system are obtained: at 0 degrees (perpendicular to the kidney) and at 20 degrees. These images are relayed to a laptop computer containing the software and graphic user interface for selecting the targeted calyx. The software provides numerical settings for the 3 axes of the gantry, which are used to position the needle guiding device. The needle is advanced through the guide to the depth calculated by the software, thus puncturing the targeted calyx. Testing of the system was performed on 2 target types: 1) radiolucent plastic tubes the approximate size of a renal calyx (5 or 10 mm in diameter, 30 mm in length); and 2) foam-occluded, contrast-filled porcine kidneys. RESULTS: Tests using target type 1 with 10 mm diameter (n = 14) and 5 mm diameter (n = 7) tubes resulted in a 100% targeting success rate, with a mean procedure duration of 10 minutes. Tests using target type 2 (n = 2) were both successful, with accurate puncturing of the selected renal calyx, and a mean procedure duration of 15 minutes. CONCLUSIONS: The mechanical gantry system described in this paper is low-cost, portable, light-weight, and simple to set up and operate. C-arm fluoroscopy is limited to two images, thus reducing radiation exposure significantly. Testing of the system showed an extremely high degree of accuracy in gaining precise access to a targeted renal calyx.
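
    As a rough sketch of the geometry behind such two-view targeting (an idealized parallel-projection approximation with rotation about the vertical axis; only the 0/20-degree angles come from the abstract, all other values and names are illustrative):

        import numpy as np

        def calyx_position(p0, p20, theta_deg=20.0):
            """Recover 3D target coordinates from two projections.

            p0  = (x, y) of the calyx in the 0-degree image (image plane = x-y).
            p20 = (x', y) in the image taken after rotating the C-arm by theta
                  about the vertical y-axis, so x' = x*cos(theta) + z*sin(theta).
            """
            theta = np.radians(theta_deg)
            x, y = p0
            xp, _ = p20
            z = (xp - x * np.cos(theta)) / np.sin(theta)   # depth along the 0-degree beam
            return np.array([x, y, z])

        # Illustrative numbers only (mm): an apparent shift of ~3 mm between the two views
        target = calyx_position(p0=(12.0, 45.0), p20=(14.4, 45.0))
        print("gantry setting (x, y) and needle depth z:", target.round(1))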

  17. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues arising from moving cells, copy-paste operations, or recursion.

  18. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Science.gov (United States)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  19. Machine learning and synthetic aperture refocusing approach for more accurate masking of fish bodies in 3D PIV data

    Science.gov (United States)

    Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    3D particle image velocimetry (PIV) is becoming a popular technique to study biological flows. PIV images that contain fish or other animals, around which flow is being studied, need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing-based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of a fish in a volume is determined, it can be masked out.

  20. Cluster Computing: A Mobile Code Approach

    Directory of Open Access Journals (Sweden)

    R. B. Patel

    2006-01-01

    Full Text Available Cluster computing harnesses the combined computing power of multiple processors in a parallel configuration. Cluster computing environments built from commodity hardware have provided a cost-effective solution for many scientific and high-performance applications. In this paper we present the design and implementation of a cluster-based framework using mobile code. The cluster implementation involves the design of a server named MCLUSTER, which manages the configuration and resetting of the cluster. It allows a user to provide the necessary information regarding the application to be executed via a graphical user interface (GUI). The framework handles the generation of application mobile code and its distribution to appropriate client nodes, the efficient handling of results generated and communicated by a number of client nodes, and the recording of the application's execution time. Each client node receives and executes the mobile code that defines the distributed job submitted by the MCLUSTER server and returns the results. We have also analyzed the performance of the developed system, emphasizing the tradeoff between communication and computation overhead.

  1. Heterogeneous Computing in Economics: A Simplified Approach

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document...

  2. Computational and mathematical approaches to societal transitions

    NARCIS (Netherlands)

    J.S. Timmermans (Jos); F. Squazzoni (Flaminio); J. de Haan (Hans)

    2008-01-01

    textabstractAfter an introduction of the theoretical framework and concepts of transition studies, this article gives an overview of how structural change in social systems has been studied from various disciplinary perspectives. This overview first leads to the conclusion that computational and mat

  3. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    Science.gov (United States)

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which initially and rapidly screens for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that, for the overall performance of the multi-stage process, the method used for confirmation is most important, with a t-test performing best when differences are moderate to large. Moreover, we find that, when differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. PMID:27262992

  4. Computational Approach To Understanding Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Włodzisław Duch

    2012-01-01

    Full Text Available Every year the prevalence of Autism Spectrum Disorders (ASD) is rising. Is there a unifying mechanism of various ASD cases at the genetic, molecular, cellular or systems level? The hypothesis advanced in this paper is focused on neural dysfunctions that lead to problems with attention in autistic people. Simulations of attractor neural networks performing cognitive functions help to assess long-term system neurodynamics. The Fuzzy Symbolic Dynamics (FSD) technique is used for the visualization of attractors in the semantic layer of the neural model of reading. Large-scale simulations of brain structures characterized by a high order of complexity require enormous computational power, especially if biologically motivated neuron models are used to investigate the influence of cellular structure dysfunctions on the network dynamics. Such simulations have to be implemented on computer clusters in grid-based architectures.

  5. Computational Enzymology, a ReaxFF approach

    DEFF Research Database (Denmark)

    Corozzi, Alessandro

    This PhD project essay is about the development of a new method to improve our understanding of enzyme catalysis in atomistic detail. Currently the theory able to describe chemical systems and their reactivity is quantum mechanics (QM): electronic structure methods that use approximations of QM...... there are ordinary classical models - the molecular mechanics (MM) force-fields - that use Newtonian mechanics to describe molecular systems. At this level it is possible to include the entire enzyme system while keeping the equations light, but at the cost of easy modeling of chemical transformations during...... the simulation time. In short: on one hand we have accurate QM methods able to describe reactivity but limited in the size of the system to describe, while on the other hand we have molecular mechanics and ordinary force-fields that are virtually unlimited in size but unable to straightforwardly describe...

  6. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  7. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic age

  8. Analysis and accurate reconstruction of incomplete data in X-ray differential phase-contrast computed tomography.

    Science.gov (United States)

    Fu, Jian; Tan, Renbo; Chen, Liyuan

    2014-01-01

    X-ray differential phase-contrast computed tomography (DPC-CT) is a powerful physical and biochemical analysis tool. In practical applications, there are often challenges for DPC-CT due to insufficient data caused by few-view, bad or missing detector channels, or limited scanning angular range. They occur quite frequently because of experimental constraints from imaging hardware, scanning geometry, and the exposure dose delivered to living specimens. In this work, we analyze the influence of incomplete data on DPC-CT image reconstruction. Then, a reconstruction method is developed and investigated for incomplete data DPC-CT. It is based on an algebraic iteration reconstruction technique, which minimizes the image total variation and permits accurate tomographic imaging with less data. This work comprises a numerical study of the method and its experimental verification using a dataset measured at the W2 beamline of the storage ring DORIS III equipped with a Talbot-Lau interferometer. The numerical and experimental results demonstrate that the presented method can handle incomplete data. It will be of interest for a wide range of DPC-CT applications in medicine, biology, and nondestructive testing.
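
    A minimal one-dimensional sketch of the kind of iteration described, alternating ART-style algebraic updates with gradient steps on a smoothed total-variation penalty; the random system matrix, the test profile and all parameters are purely illustrative, not the authors' DPC-CT implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Piecewise-constant "phase" profile and an under-determined measurement (incomplete data)
        n = 64
        x_true = np.zeros(n)
        x_true[20:40] = 1.0
        A = rng.normal(size=(24, n)) / np.sqrt(n)      # only 24 measurements for 64 unknowns
        b = A @ x_true

        def tv_grad(x, eps=1e-6):
            """Gradient of the smoothed total variation sum(sqrt(dx**2 + eps))."""
            d = np.diff(x)
            g = d / np.sqrt(d * d + eps)
            out = np.zeros_like(x)
            out[:-1] -= g
            out[1:] += g
            return out

        x = np.zeros(n)
        lam, tv_step = 0.5, 0.02
        for _ in range(200):
            # ART sweep: project towards each measurement hyperplane in turn
            for i in range(A.shape[0]):
                a = A[i]
                x += lam * (b[i] - a @ x) / (a @ a) * a
            # A few TV-descent steps to suppress artifacts from the missing data
            for _ in range(5):
                x -= tv_step * tv_grad(x)

        print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))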

  9. Hydrogen sulfide detection based on reflection: from a poison test approach of ancient China to single-cell accurate localization.

    Science.gov (United States)

    Kong, Hao; Ma, Zhuoran; Wang, Song; Gong, Xiaoyun; Zhang, Sichun; Zhang, Xinrong

    2014-08-01

    With the inspiration of an ancient Chinese poison test approach, we report a rapid hydrogen sulfide detection strategy for specific areas of live cells using silver needles, with a good spatial resolution of 2 × 2 μm². Besides this accurate-localization ability, the reflection-based strategy also offers convenience and a robust response, requiring no pretreatment and only a short detection time. The successful evaluation of endogenous H2S levels in the cytoplasm and nucleus of human A549 cells points to the application potential of our strategy in scientific research and medical diagnosis.

  10. SOFT COMPUTING APPROACH FOR NOISY IMAGE RESTORATION

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A genetic learning algorithm based fuzzy neural network was proposed for noisy image restoration, which can adaptively find and extract the fuzzy rules contained in noise. It can efficiently remove image noise and preserve detailed image information as much as possible. The experimental results show that the proposed approach is able to perform far better than conventional noise removal techniques.

  11. A new approach to accurate validation of remote sensing retrieval of evapotranspiration based on data fusion

    Directory of Open Access Journals (Sweden)

    C. Sun

    2010-03-01

    obtained from RS retrieval, which was in accordance with previous studies (Jamieson, 1982; Dugas and Ainsworth, 1985; Benson et al., 1992; Pereira and Nova, 1992).

    After the data fusion, the correlation (R²=0.8516) between the monthly runoff obtained from the simulation based on ET retrieval and the observed data was higher than that (R²=0.8411) between the data obtained from the PM-based ET simulation and the observed data. As for the RMSE, the result (RMSE=26.0860) between the simulated runoff based on ET retrieval and the observed data was also superior to the result (RMSE=35.71904) between the simulated runoff obtained with PM-based ET and the observed data. As for the MBE parameter, the result (MBE=−8.6578) for the RS retrieval method was clearly better than that (MBE=−22.7313) for the PM-based method. This comparison showed that the RS retrieval had better adaptivity and higher accuracy than the PM-based method, and that the new approach based on data fusion and the distributed hydrological model is feasible, reliable and worth studying further.
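
    The comparison statistics quoted above are standard; a small sketch (with made-up monthly runoff values, not the study's data) of how R², RMSE and MBE between simulated and observed series are typically computed:

        import numpy as np

        def validation_stats(sim, obs):
            """R^2 (taken here as the squared Pearson correlation), RMSE and mean bias error."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r2 = np.corrcoef(sim, obs)[0, 1] ** 2
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            mbe = np.mean(sim - obs)
            return r2, rmse, mbe

        # Illustrative monthly runoff series only (not the study's data)
        obs    = [110.0, 95.0, 140.0, 180.0, 220.0, 160.0]
        sim_rs = [105.0, 90.0, 150.0, 175.0, 210.0, 150.0]   # e.g. driven by RS-retrieved ET
        print("R2=%.4f  RMSE=%.4f  MBE=%.4f" % validation_stats(sim_rs, obs))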

  12. Computational approaches to homogeneous gold catalysis.

    Science.gov (United States)

    Faza, Olalla Nieto; López, Carlos Silva

    2015-01-01

    Homogeneous gold catalysis has been expanding at an outstanding pace for the last decade. The best described reactivity of Au(I) and Au(III) species is based on gold's properties as a soft Lewis acid, but new reactivity patterns have recently emerged which further expand the range of transformations achievable using gold catalysis, with examples of dual gold activation, hydrogenation reactions, or Au(I)/Au(III) catalytic cycles. In this scenario, to fully develop all these new possibilities, the use of computational tools to understand at an atomistic level of detail the complete role of gold as a catalyst is unavoidable. In this work we aim to provide a comprehensive review of the available benchmark works on methodological options to study homogeneous gold catalysis, in the hope that this effort can help guide the choice of method in future mechanistic studies involving gold complexes. This is relevant because a representative number of current mechanistic studies still use methods which have been reported as inappropriate and dangerously inaccurate for this chemistry. Together with this, we describe a number of recent mechanistic studies where computational chemistry has provided relevant insights into non-conventional reaction paths, unexpected selectivities or novel reactivity, which illustrate the complexity behind gold-mediated organic chemistry.

  13. A complex network approach to cloud computing

    CERN Document Server

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem that heavily influences the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees, and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...
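
    A hedged sketch of the two indices described (mean user-to-nearest-server hop distance, and the balance of task assignment between two servers) on ER and BA graphs using networkx; the sizes, mean degree and server placement below are illustrative only.

        import networkx as nx
        import numpy as np

        def cloud_indices(g, servers):
            """Communication cost = mean hop distance to the nearest server;
            balance = fraction of users assigned to the busier server (0.5 is ideal)."""
            assigned, costs = [], []
            for u in g.nodes:
                if u in servers:
                    continue
                dist, s = min((nx.shortest_path_length(g, u, s), s) for s in servers)
                costs.append(dist)
                assigned.append(s)
            counts = np.bincount([servers.index(s) for s in assigned], minlength=len(servers))
            return np.mean(costs), counts.max() / counts.sum()

        n, seed = 200, 1
        er = nx.erdos_renyi_graph(n, 4.0 / n, seed=seed)     # mean degree ~4
        ba = nx.barabasi_albert_graph(n, 2, seed=seed)       # mean degree ~4
        er = er.subgraph(max(nx.connected_components(er), key=len)).copy()

        for name, g in [("ER", er), ("BA", ba)]:
            servers = list(g.nodes)[:2]                      # two servers, arbitrary placement
            cost, balance = cloud_indices(g, servers)
            print(f"{name}: mean cost={cost:.2f}, load of busier server={balance:.2f}")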

  14. Computing Greeks for L\\'evy Models: The Fourier Transform Approach

    OpenAIRE

    Federico De Olivera; Ernesto Mordecki

    2014-01-01

    The computation of Greeks for exponential L\'evy models is usually approached by Malliavin calculus and other methods, such as the likelihood ratio and the finite difference method. In this paper we obtain exact formulas for Greeks of European options based on the Lewis formula for the option value. Therefore, it is possible to obtain accurate approximations using the Fast Fourier Transform. We will present an exhaustive development of Greeks for Call options. The error is shown for all Greeks in the...

  15. Novel Computational Approaches to Drug Discovery

    Science.gov (United States)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  16. Securing applications in personal computers: the relay race approach.

    OpenAIRE

    Wright, James Michael

    1991-01-01

    Approved for public release; distribution is unlimited. This thesis reviews the increasing need for security in a personal computer (PC) environment and proposes a new approach for securing PC applications at the application layer. The Relay Race Approach extends two standard approaches, data encryption and password access control, from the main program level to the subprogram level by the use of a special parameter, the "Baton". The applicability of this approach is de...

  17. A polyhedral approach to computing border bases

    CERN Document Server

    Braun, Gábor

    2009-01-01

    Border bases can be considered to be the natural extension of Gr\\"obner bases that have several advantages. Unfortunately, to date the classical border basis algorithm relies on (degree-compatible) term orderings and implicitly on reduced Gr\\"obner bases. We adapt the classical border basis algorithm to allow for calculating border bases for arbitrary degree-compatible order ideals, which is \\emph{independent} from term orderings. Moreover, the algorithm also supports calculating degree-compatible order ideals with \\emph{preference} on contained elements, even though finding a preferred order ideal is NP-hard. Effectively we retain degree-compatibility only to successively extend our computation degree-by-degree. The adaptation is based on our polyhedral characterization: order ideals that support a border basis correspond one-to-one to integral points of the order ideal polytope. This establishes a crucial connection between the ideal and the combinatorial structure of the associated factor spaces.

  18. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these approaches is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  19. Q-P Wave traveltime computation by an iterative approach

    KAUST Repository

    Ma, Xuxin

    2013-01-01

    In this work, we present a new approach to compute anisotropic traveltime based on solving successively elliptical isotropic traveltimes. The method shows good accuracy and is very simple to implement.

  20. Computing material fronts with a Lagrange-Projection approach

    CERN Document Server

    Chalons, Christophe

    2010-01-01

    This paper reports investigations on the computation of material fronts in multi-fluid models using a Lagrange-Projection approach. Various forms of the Projection step are considered. Particular attention is paid to minimization of conservation errors.

  1. Soft computing approaches to uncertainty propagation in environmental risk mangement

    OpenAIRE

    Kumar, Vikas

    2008-01-01

    Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like pro...

  2. Biologically motivated computationally intensive approaches to image pattern recognition

    NARCIS (Netherlands)

    Petkov, Nikolay

    1995-01-01

    This paper presents some of the research activities of the research group in vision as a grand challenge problem whose solution is estimated to need the power of Tflop/s computers and for which computational methods have yet to be developed. The concerned approaches are biologically motivated, in th

  3. An Approach to Dynamic Provisioning of Social and Computational Services

    NARCIS (Netherlands)

    Bonino da Silva Santos, Luiz Olavo; Sorathia, Vikram; Ferreira Pires, Luis; Sinderen, van Marten

    2010-01-01

    Service-Oriented Computing (SOC) builds upon the intuitive notion of service already known and used in our society for a long time. SOC-related approaches are based on computer-executable functional units that often represent automation of services that exist at the social level, i.e., services at t

  4. A genetic and computational approach to structurally classify neuronal types

    OpenAIRE

    Sümbül, Uygar; Song, Sen; McCulloch, Kyle; Becker, Michael; Lin, Bin; Sanes, Joshua R.; Masland, Richard H.; Seung, H. Sebastian

    2014-01-01

    The importance of cell types in understanding brain function is widely appreciated but only a tiny fraction of neuronal diversity has been catalogued. Here, we exploit recent progress in genetic definition of cell types in an objective structural approach to neuronal classification. The approach is based on highly accurate quantification of dendritic arbor position relative to neurites of other cells. We test the method on a population of 363 mouse retinal ganglion cells. For each cell, we de...

  5. Unbiased QM/MM approach using accurate multipoles from a linear scaling DFT calculation with a systematic basis set

    Science.gov (United States)

    Mohr, Stephan; Genovese, Luigi; Ratcliff, Laura; Masella, Michel

    The quantum mechanics/molecular mechanics (QM/MM) method is a popular approach that allows one to perform atomistic simulations using different levels of accuracy. Since only the essential part of the simulation domain is treated using a highly precise (but also expensive) QM method, whereas the remaining parts are handled using a less accurate level of theory, this approach makes it possible to considerably extend the total system size that can be simulated without a notable loss of accuracy. In order to couple the QM and MM regions we use an approximation of the electrostatic potential based on a multipole expansion. The multipoles of the QM region are determined based on the results of a linear scaling Density Functional Theory (DFT) calculation using a set of adaptive, localized basis functions, as implemented within the BigDFT software package. As this determination comes at virtually no extra cost compared to the QM calculation, the coupling between the QM and MM regions can be done very efficiently. In this presentation I will demonstrate the accuracy of both the linear scaling DFT approach itself and of the approximation of the electrostatic potential based on the multipole expansion, and show some first QM/MM applications using the aforementioned approach.
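
    A minimal sketch of the kind of coupling described: evaluating the electrostatic potential of the QM region at MM sites from its low-order multipoles (monopole and dipole only, in atomic units; the values and the truncation order are illustrative assumptions, not BigDFT's actual implementation).

        import numpy as np

        def multipole_potential(r_mm, center, q, p):
            """Potential at MM site r_mm from charge q and dipole p located at 'center'
            (atomic units, expansion truncated after the dipole term)."""
            d = np.asarray(r_mm) - np.asarray(center)
            r = np.linalg.norm(d)
            return q / r + np.dot(p, d) / r**3

        # Illustrative QM-region multipoles (e and e*bohr) and a few MM sites (bohr)
        center = np.array([0.0, 0.0, 0.0])
        q_qm, p_qm = -0.8, np.array([0.0, 0.0, 0.5])
        mm_sites = np.array([[6.0, 0.0, 0.0], [0.0, 0.0, 8.0], [4.0, 4.0, 4.0]])

        for site in mm_sites:
            v = multipole_potential(site, center, q_qm, p_qm)
            print(f"V({site}) = {v: .5f} Ha/e")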

  6. Delay Computation Using Fuzzy Logic Approach

    Directory of Open Access Journals (Sweden)

    Ramasesh G. R.

    2012-10-01

    Full Text Available The paper presents a practical application of fuzzy sets and systems theory to predicting, with reasonable accuracy, delays arising from a wide range of factors pertaining to construction projects. In this paper we use fuzzy logic to predict delays on account of delayed supplies and labour shortage. It is observed that project scheduling software uses either deterministic or probabilistic methods for the computation of schedule durations, delays, lags and other parameters. In other words, these methods use only quantitative inputs, leaving out the qualitative aspects associated with individual activities of work. A qualitative aspect, viz. the expertise of the mason or the lack of experience, can have a significant impact on the assessed duration. Such qualitative aspects do not find adequate representation in project scheduling software. A realistic project is considered, for which a PERT chart has been prepared showing all the major activities in reasonable detail. This project has been periodically updated until its completion. It is observed that some of the activities are delayed due to extraneous factors, resulting in the overall delay of the project. The software has the capability to calculate the overall delay through CPM (Critical Path Method) when each of the activity delays is reported. We shall now demonstrate that by using fuzzy logic, these delays could have been predicted well in advance.
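
    A toy sketch of the kind of fuzzy inference involved (triangular memberships, min-AND rules and weighted-average defuzzification); the membership ranges, rule base and output values in days are illustrative assumptions, not the paper's calibrated model.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership with peak at b and support (a, c)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def predict_delay(supply_delay_days, labour_shortage_pct):
            """Tiny Mamdani-style rule base; shoulders are approximated by
            triangles extending past the input domain."""
            sup_low  = tri(supply_delay_days, -1, 0, 5)
            sup_high = tri(supply_delay_days, 3, 10, 17)
            lab_low  = tri(labour_shortage_pct, -5, 0, 20)
            lab_high = tri(labour_shortage_pct, 10, 40, 70)

            # Rule firing strengths (AND = min) paired with output delays in days
            rules = [
                (min(sup_low,  lab_low),  1.0),   # both favourable: small delay
                (min(sup_high, lab_low),  6.0),
                (min(sup_low,  lab_high), 8.0),
                (min(sup_high, lab_high), 15.0),  # both unfavourable: large delay
            ]
            w = np.array([r[0] for r in rules])
            d = np.array([r[1] for r in rules])
            return float(w @ d / (w.sum() + 1e-9))  # weighted-average defuzzification

        print("predicted activity delay: %.1f days" % predict_delay(4.0, 15.0))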

  7. General approaches in ensemble quantum computing

    Indian Academy of Sciences (India)

    V Vimalan; N Chandrakumar

    2008-01-01

    We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate experimentally our methodology for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of the typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration taken to a third compared to the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.

  8. Multivariate analysis: A statistical approach for computations

    Science.gov (United States)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a multivariate statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis provides an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
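
    A small sketch of the correlation-matrix idea for traffic anomaly detection (compare each window's feature-correlation matrix against a baseline); the feature set, the synthetic generator and the scoring are illustrative assumptions, not the paper's method.

        import numpy as np

        rng = np.random.default_rng(1)

        def make_traffic(n, attack=False):
            """Synthetic per-second features: packets/s, bytes/s, distinct ports, SYN ratio."""
            pkts = rng.normal(100, 10, n)
            byts = 1500 * pkts + rng.normal(0, 5000, n)   # bytes track packets under normal load
            ports = rng.normal(20, 3, n)
            syn = rng.normal(0.10, 0.02, n)
            if attack:                                    # SYN-flood-like window: many tiny packets
                flood = 300 * rng.random(n)
                pkts += flood                             # packet rate decouples from byte rate
                byts += 60 * flood                        # flood packets are small
                syn += 0.004 * flood                      # and mostly SYNs
            return np.column_stack([pkts, byts, ports, syn])

        def corr(window):
            return np.corrcoef(window, rowvar=False)

        baseline = corr(make_traffic(2000))

        def anomaly_score(window):
            """Frobenius distance between the window's correlation structure and the baseline."""
            return np.linalg.norm(corr(window) - baseline)

        print("benign window:", round(anomaly_score(make_traffic(200)), 3))
        print("scan-like window:", round(anomaly_score(make_traffic(200, attack=True)), 3))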

  9. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth...

  10. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date, several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which

  11. Convergence Analysis of a Class of Computational Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence approaches constitute a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate, respectively, the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  12. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    CERN Document Server

    Abolfazli, Saeid; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward a rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified into two main groups, namely hardware and software. Generating high-end hardware is a subset of hardware augmentation approaches, whereas conserving local resources and reducing resource requirements are grouped under software augmentation methods. Our study advocates that conserving smartphones' native resources, which is mainly done via task offloading, is more appropriate for already-developed applications than new ones, due to the costly re-development process. Cloud computing has recently obtained momentous ground as one of the major co...

  13. What is intrinsic motivation? A typology of computational approaches

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Oudeyer

    2009-11-01

    Full Text Available Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.

  14. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia;

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope...... and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated....

  15. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Science.gov (United States)

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
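
    A hedged single-machine sketch of the octree idea described (each pose reduced to one 3D point, an octree cell identifier assigned at a fixed depth, and the densest cell reported); the encoding of a pose as its centroid, the depth and the synthetic data are illustrative assumptions, and the MapReduce/Hadoop distribution layer is omitted.

        import numpy as np
        from collections import Counter

        def octant_id(point, lo, hi, depth):
            """Index of the octree cell containing 'point' after 'depth' binary splits per axis."""
            p = (np.asarray(point) - lo) / (hi - lo)           # normalize into the unit cube
            cell = np.clip((p * 2 ** depth).astype(int), 0, 2 ** depth - 1)
            return tuple(cell)                                  # one integer index per axis

        def densest_octant(points, depth=3):
            lo, hi = points.min(axis=0), points.max(axis=0)
            counts = Counter(octant_id(p, lo, hi, depth) for p in points)
            return counts.most_common(1)[0]

        # Illustrative docking output: each pose reduced to a single 3D point (e.g. its
        # centroid), with a dense near-native cluster plus scattered decoys
        rng = np.random.default_rng(2)
        native = rng.normal(loc=[10.0, 5.0, -3.0], scale=0.4, size=(60, 3))
        decoys = rng.uniform(low=[0.0, 0.0, -10.0], high=[20.0, 10.0, 10.0], size=(140, 3))
        poses = np.vstack([native, decoys])

        cell, count = densest_octant(poses, depth=3)
        print(f"densest cell {cell} holds {count} of {len(poses)} poses")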

  16. Meta-analytic approach to the accurate prediction of secreted virulence effectors in gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Sato Yoshiharu

    2011-11-01

    Full Text Available Abstract Background Many pathogens use a type III secretion system to translocate virulence proteins (called effectors in order to adapt to the host environment. To date, many prediction tools for effector identification have been developed. However, these tools are insufficiently accurate for producing a list of putative effectors that can be applied directly for labor-intensive experimental verification. This also suggests that important features of effectors have yet to be fully characterized. Results In this study, we have constructed an accurate approach to predicting secreted virulence effectors from Gram-negative bacteria. This consists of a support vector machine-based discriminant analysis followed by a simple criteria-based filtering. The accuracy was assessed by estimating the average number of true positives in the top-20 ranking in the genome-wide screening. In the validation, 10 sets of 20 training and 20 testing examples were randomly selected from 40 known effectors of Salmonella enterica serovar Typhimurium LT2. On average, the SVM portion of our system predicted 9.7 true positives from 20 testing examples in the top-20 of the prediction. Removal of the N-terminal instability, codon adaptation index and ProtParam indices decreased the score to 7.6, 8.9 and 7.9, respectively. These discrimination features suggested that the following characteristics of effectors had been uncovered: unstable N-terminus, non-optimal codon usage, hydrophilic, and less aliphatic. The secondary filtering process represented by coexpression analysis and domain distribution analysis further refined the average true positive counts to 12.3. We further confirmed that our system can correctly predict known effectors of P. syringae DC3000, strongly indicating its feasibility. Conclusions We have successfully developed an accurate prediction system for screening effectors on a genome-wide scale. We confirmed the accuracy of our system by external validation
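
    A hedged sketch of the first (SVM) stage only, using scikit-learn on simulated per-protein features loosely named after those in the abstract; the feature values, class sizes and kernel settings are illustrative assumptions, not the authors' trained model.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)

        # Hypothetical features per protein, loosely named after those in the abstract:
        # [N-terminal instability index, codon adaptation index, hydropathy, aliphatic index]
        def simulate(n, effector):
            base = np.array([45, 0.55, -0.4, 70]) if effector else np.array([35, 0.70, 0.0, 85])
            return base + rng.normal(0, [5, 0.05, 0.2, 8], size=(n, 4))

        X_train = np.vstack([simulate(20, True), simulate(20, False)])
        y_train = np.array([1] * 20 + [0] * 20)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_train, y_train)

        # Genome-wide screening: rank all candidate proteins by SVM decision value, keep top 20
        candidates = np.vstack([simulate(10, True), simulate(490, False)])
        scores = clf.decision_function(candidates)
        top20 = np.argsort(scores)[::-1][:20]
        print("true effectors recovered in the top-20:", int((top20 < 10).sum()))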

  17. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
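
    A rough SIS-style sketch of the degree dependence reported (higher-degree nodes are infected more often), using a Barabasi-Albert network in networkx; the rates, the small spontaneous-infection term standing in for contact with infected external computers, and the sizes are illustrative assumptions rather than the paper's model equations.

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(4)
        g = nx.barabasi_albert_graph(1000, 3, seed=4)     # scale-free, Internet-like topology
        n = g.number_of_nodes()
        deg = np.array([d for _, d in g.degree()])        # node labels are 0..n-1

        beta, gamma, eps = 0.05, 0.20, 0.001  # per-contact infection, cure, external-influx rates
        infected = rng.random(n) < 0.01
        time_infected = np.zeros(n)
        steps = 200

        for _ in range(steps):
            new = infected.copy()
            for u in g.nodes:
                if infected[u]:
                    if rng.random() < gamma:              # cured (patched or reinstalled)
                        new[u] = False
                else:
                    k_inf = sum(infected[v] for v in g.neighbors(u))
                    p = 1 - (1 - beta) ** k_inf           # infection via infected neighbours
                    if rng.random() < p + eps:            # plus contact with external computers
                        new[u] = True
            infected = new
            time_infected += infected

        # Higher-degree nodes spend a larger fraction of the time infected
        for lo, hi in [(3, 5), (5, 15), (15, deg.max() + 1)]:
            mask = (deg >= lo) & (deg < hi)
            print(f"degree {lo}-{hi - 1}: mean infected fraction "
                  f"{time_infected[mask].mean() / steps:.2f}")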

  18. A hybrid stochastic-deterministic computational model accurately describes spatial dynamics and virus diffusion in HIV-1 growth competition assay.

    Science.gov (United States)

    Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela

    2012-11-01

    We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.
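
    A one-dimensional toy sketch of the hybrid scheme described: deterministic virus diffusion by explicit finite differences, alternated with stochastic infection of immobile cells whose infection probability saturates with the local virus density. All grid sizes, rates and the saturation form are illustrative assumptions, not the published model parameters.

        import numpy as np

        rng = np.random.default_rng(5)

        n, steps = 100, 300
        virus = np.zeros(n)
        virus[n // 2] = 50.0                    # initial inoculum in the middle of the monolayer
        cells = np.zeros(n, dtype=int)          # 0 = susceptible, 1 = infected (immobile PBMCs)
        D, dt = 0.2, 1.0                        # diffusion coefficient (grid units^2 per step)
        burst, decay, v50 = 0.5, 0.05, 10.0     # production, clearance, half-saturation density

        for t in range(steps):
            # Deterministic step: explicit finite-difference diffusion with reflecting ends
            lap = np.zeros(n)
            lap[1:-1] = virus[2:] - 2 * virus[1:-1] + virus[:-2]
            lap[0] = virus[1] - virus[0]
            lap[-1] = virus[-2] - virus[-1]
            virus = virus + D * dt * lap - decay * virus
            # Stochastic step: each susceptible cell is infected with a probability that
            # saturates with the local virus density (threshold behaviour from dilution assays)
            p_inf = virus / (virus + v50)
            newly = (cells == 0) & (rng.random(n) < p_inf)
            cells[newly] = 1
            virus += burst * cells              # infected cells release new virus

        print("fraction of cells infected:", cells.mean())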

  19. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  20. A tale of three bio-inspired computational approaches

    Science.gov (United States)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched and some lessons I have gleaned from three decades of working with EC will be covered. Then neural networks, computational approaches that have long been studied as possible ways to make "thinking machines", an old dream of man's, based upon the only known existing example of intelligence. Then, a brief overview of attempts to combine these two approaches that some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  1. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  2. Computational biomechanics for medicine new approaches and new applications

    CERN Document Server

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  3. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Directory of Open Access Journals (Sweden)

    Corrado Lodovico Galli

    Full Text Available Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin) and a selection of their metabolites were evaluated. Our approach was demonstrated to be useful as a first step of chemical safety evaluation, since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning the AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than a fully wet strategy.

  4. A Unified Computational Approach to Oxide Aging Processes

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, D.J.; Fleetwood, D.M.; Hjalmarson, H.P.; Schultz, P.A.

    1999-01-27

    In this paper we describe a unified, hierarchical computational approach to aging and reliability problems caused by materials changes in the oxide layers of Si-based microelectronic devices. We apply this method to a particular low-dose-rate radiation effects problem

  5. A computational approach to mechanistic and predictive toxicology of pesticides

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Vinggaard, Anne Marie; Hadrup, Niels;

    2014-01-01

    Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes...

  6. Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach

    Directory of Open Access Journals (Sweden)

    Grover Kearns

    2010-06-01

    Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants to use forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using perceptions of the students about the success of the methodology and their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.

  7. An engineering based approach for hydraulic computations in river flows

    Science.gov (United States)

    Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.

    2016-06-01

    This paper presents an engineering-based approach for hydraulic risk evaluation. The aim of the research is to identify criteria for choosing the simplest appropriate model to use in different scenarios as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities and accelerations, can be described through a three-dimensional approach that considers all flow properties varying in all directions. In many practical applications for river flow studies, however, the greatest changes occur in only two dimensions or even only one. In these cases the use of simplified approaches can lead to accurate results, with simulations that are easier to build and faster to run. The study was conducted taking into account a dimensionless channel parameter, the ratio of curvature radius to channel width (R/B).

  8. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. The integration of Cloud computing with parallel computing is also expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would fit best to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  9. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    Science.gov (United States)

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.
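
    The two-layer ONIOM extrapolation that underlies this kind of hybrid study combines a high-level calculation on a small model region with low-level calculations on the full system. A minimal sketch of that bookkeeping is shown below; the energies passed in are placeholders for illustration, not values from the study.

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM extrapolation:
    E(ONIOM) = E(high, model) + E(low, real) - E(low, model)."""
    return e_high_model + e_low_real - e_low_model

# Hypothetical single-point energies in hartree, for illustration only.
print(oniom2_energy(-154.732, -540.118, -154.421))
```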

  10. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    Science.gov (United States)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-08-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. We discuss in particular the relationship between the inclined angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully validated by comparison with real measurements.

  11. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    Directory of Open Access Journals (Sweden)

    Abadjiev Valentin

    2014-09-01

    Computer-aided design has led to different types of software for scientific research in the field of gearing theory, as well as to adequate scientific support for gear drive manufacture. The computer programs presented here are based on mathematical models developed through that research. Modern gear transmissions require new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on iteration procedures that find an optimal solution by varying specific parameters.

  12. An evolutionary computational approach for the dynamic Stackelberg competition problems

    Directory of Open Access Journals (Sweden)

    Lorena Arboleda-Castro

    2016-06-01

    Stackelberg competition models are an important family of economic decision problems from game theory, in which the main goal is to find optimal strategies between two competitors taking into account their hierarchical relationship. Although these models have been widely studied in the past, very few works deal with uncertainty scenarios, especially those that vary over time. The present research studies this topic and proposes a computational method for efficiently solving dynamic Stackelberg competition models. The computational experiments suggest that the proposed approach is effective for problems of this nature.
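
    As a hedged illustration of the leader-follower structure described above (not the paper's algorithm), the sketch below solves a classic static Stackelberg duopoly with linear demand by combining the follower's analytic best response with a simple (1+1) evolution-strategy search over the leader's quantity; the demand and cost parameters are made up.

```python
import random

a, c = 100.0, 10.0           # hypothetical inverse-demand intercept and unit cost

def follower_best_response(q_leader):
    # Follower maximizes (a - qL - qF - c) * qF  ->  qF = (a - c - qL) / 2
    return max(0.0, (a - c - q_leader) / 2.0)

def leader_profit(q_leader):
    q_f = follower_best_response(q_leader)
    return (a - q_leader - q_f - c) * q_leader

# (1+1) evolution strategy: mutate the leader's quantity, keep improvements.
random.seed(0)
q, step = 1.0, 5.0
for _ in range(2000):
    candidate = max(0.0, q + random.gauss(0.0, step))
    if leader_profit(candidate) > leader_profit(q):
        q = candidate
    step *= 0.999             # slowly shrink the mutation size

print(q, leader_profit(q))    # analytic optimum is qL = (a - c) / 2 = 45
```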

  13. Fragment-based approaches and computer-aided drug discovery.

    Science.gov (United States)

    Rognan, Didier

    2012-01-01

    Fragment-based design has significantly modified drug discovery strategies and paradigms in the last decade. Besides technological advances and novel therapeutic avenues, one of the most significant changes brought by this new discipline has occurred in the minds of drug designers. Fragment-based approaches have markedly impacted rational computer-aided design both in method development and in applications. The present review illustrates the importance of molecular fragments in many aspects of rational ligand design, and discusses how thinking in "fragment space" has boosted computational biology and chemistry. PMID:21710380

  14. Computational approaches for rational design of proteins with novel functionalities

    DEFF Research Database (Denmark)

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar;

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has...... exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo...... functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes....

  15. Computer vision approaches to medical image analysis. Revised papers

    International Nuclear Information System (INIS)

    This book constitutes the thoroughly refereed post proceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, EECV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)

  16. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Highlights: ► A novel method is introduced for desk calculation of toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by QSTR method. ► The present method gives good predictions for further 324 benzoic acid compounds. - Abstract: Most of benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel simple and reliable models are introduced for desk calculations of the toxicity of benzoic acid compounds in mice via oral LD50 with more reliance on their answers as one could attach to the more complex outputs. They require only elemental composition and molecular fragments without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, which can be improved by several molecular fragments in the second model. For 57 benzoic compounds, where the computed results of quantitative structure–toxicity relationship (QSTR) were recently reported, the predicted results of two simple models of present method are more reliable than QSTR computations. The present simple method is also tested with further 324 benzoic acid compounds including complex molecular structures, which confirm good forecasting ability of the second model.

  17. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Keshavarz, Mohammad Hossein, E-mail: mhkeshavarz@mut-es.ac.ir [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of); Gharagheizi, Farhad [Department of Chemical Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Islamic Republic of Iran (Iran, Islamic Republic of); Shokrolahi, Arash; Zakinejad, Sajjad [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115, Isfahan, Islamic Republic of Iran (Iran, Islamic Republic of)

    2012-10-30

    Highlights: ► A novel method is introduced for desk calculation of toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by QSTR method. ► The present method gives good predictions for further 324 benzoic acid compounds. - Abstract: Most of benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel simple and reliable models are introduced for desk calculations of the toxicity of benzoic acid compounds in mice via oral LD50 with more reliance on their answers as one could attach to the more complex outputs. They require only elemental composition and molecular fragments without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, which can be improved by several molecular fragments in the second model. For 57 benzoic compounds, where the computed results of quantitative structure-toxicity relationship (QSTR) were recently reported, the predicted results of two simple models of present method are more reliable than QSTR computations. The present simple method is also tested with further 324 benzoic acid compounds including complex molecular structures, which confirm good forecasting ability of the second model.

  18. Toroidal figures of equilibrium from a 2nd-order accurate, accelerated SCF-method with subgrid approach

    CERN Document Server

    Huré, J -M

    2016-01-01

    We compute the structure of a self-gravitating torus with polytropic equation-of-state (EOS) rotating in an imposed centrifugal potential. The Poisson-solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at $2$nd-order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index $n \\ge 1$), the underlying $2$nd-order is naturally recovered for Boundary Values (BVs) and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, Virial Parameter, etc.), i.e. errors vary with the number $N$ of nodes per direction as $\\sim 1/N^2$. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS ($n < 1$), due to ...

  19. Style: A Computational and Conceptual Blending-Based Approach

    Science.gov (United States)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  20. Development of a computationally efficient urban flood modelling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Ntegeka, Victor; Murla, Damian;

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can...... be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate...... the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than the original highly...

  1. Development of a computationally efficient urban flood modelling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can...... be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate...... the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than the original highly......

  2. Computer Mechatronics: A Radical Approach to Mechatronics Education

    OpenAIRE

    Nilsson, Martin

    2005-01-01

    This paper describes some distinguishing features of a course on mechatronics based on computer science. We propose a teaching approach called Controlled Problem-Based Learning (CPBL). We have applied this method to three generations (2003-2005) of mainly fourth-year undergraduate students at Lund University (LTH). Although students found the course difficult, there were no dropouts, and all students attended the examination in 2005.

  3. Transparency and deliberation within the FOMC: a computational linguistics approach

    OpenAIRE

    Hansen, Stephen; McMahon, Michael; Prat, Andrea

    2014-01-01

    How does transparency, a key feature of central bank design, affect the deliberation of monetary policymakers? We exploit a natural experiment in the Federal Open Market Committee in 1993 together with computational linguistic models (particularly Latent Dirichlet Allocation) to measure the effect of increased transparency on debate. Commentators have hypothesized both a beneficial discipline effect and a detrimental conformity effect. A difference-in-differences approach inspired by the care...
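
    Latent Dirichlet Allocation, the topic model the authors use to quantify deliberation, can be prototyped on any small corpus. The sketch below uses scikit-learn on a few toy sentences; the documents and the topic count are illustrative stand-ins, not the FOMC transcripts analysed in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # toy stand-ins for meeting transcript sections
    "inflation outlook and interest rate policy discussion",
    "labor market employment growth and wage pressure",
    "interest rate decision inflation target policy stance",
    "employment data wage growth labor participation",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```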

  4. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (Keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) valid over an identified range of neutronic conditions (the area of applicability). The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  5. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    Science.gov (United States)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  6. Reservoir Computing approach to Great Lakes water level forecasting

    Science.gov (United States)

    Coulibaly, Paulin

    2010-02-01

    The use of echo state networks (ESN) for dynamical system modeling is known as Reservoir Computing and has been shown to be effective for a number of applications, including signal processing, learning grammatical structure, time series prediction and motor/system control. However, the performance of the Reservoir Computing approach on hydrological time series remains largely unexplored. This study investigates the potential of ESN or Reservoir Computing for long-term prediction of lake water levels. Great Lakes water levels from 1918 to 2005 are used to develop and evaluate the ESN models. The forecast performance of the ESN-based models is compared with the results obtained from two benchmark models, the conventional recurrent neural network (RNN) and the Bayesian neural network (BNN). The test results indicate a strong ability of ESN models to provide improved lake level forecasts up to 10 months ahead, suggesting that the inherent structure and innovative learning approach of the ESN is suitable for hydrological time series modeling. Another particular advantage of the ESN learning approach is that it simplifies the network training complexity and avoids the limitations inherent to the gradient descent optimization method. Overall, it is shown that the ESN can be a good alternative method for improved lake level forecasting, performing better than both the RNN and the BNN on the four selected Great Lakes time series, namely, the Lakes Erie, Huron-Michigan, Ontario, and Superior.
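
    A minimal echo state network of the kind described above can be written in a few lines of NumPy: a fixed random reservoir rescaled to a chosen spectral radius, plus a ridge-regression readout. The hyperparameters and the toy sine series below are illustrative only, not the lake-level data or settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, rho, ridge = 200, 0.9, 1e-6

# Toy series standing in for monthly water levels: predict u[t+1] from u[t].
t = np.arange(1200)
u = np.sin(2 * np.pi * t / 120) + 0.1 * rng.standard_normal(t.size)

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius

# Drive the reservoir and collect states (discard a warm-up period).
x = np.zeros(n_res)
states = []
for ut in u[:-1]:
    x = np.tanh(W_in @ [ut] + W @ x)
    states.append(x.copy())
X = np.array(states[100:])
y = u[101:]                         # one-step-ahead targets

# Ridge-regression readout.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```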

  7. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations.

    Science.gov (United States)

    Harb, Moussab

    2015-10-14

    Using accurate first-principles quantum calculations based on DFT (including the DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we can predict the essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit a relatively high absorption efficiency in the visible range, a high dielectric constant, high charge carrier mobility and a much lower exciton binding energy than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties were found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices such as Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications. PMID:26351755

  8. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-08-26

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.
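
    For context on one of the quantities listed above, the exciton binding energy of a bulk semiconductor is often estimated with the hydrogenic Wannier-Mott model from the reduced effective mass and dielectric constant. The sketch below shows that estimate with made-up parameter values, not the HSE06 results reported in the paper.

```python
RYDBERG_EV = 13.605693  # hydrogen Rydberg energy in eV

def wannier_mott_binding(m_e, m_h, eps_r):
    """Hydrogenic exciton binding energy in eV.
    m_e, m_h: electron/hole effective masses in units of the free-electron mass.
    eps_r: relative dielectric constant."""
    mu = m_e * m_h / (m_e + m_h)          # reduced effective mass
    return RYDBERG_EV * mu / eps_r**2

# Hypothetical effective masses and dielectric constant, for illustration only.
print(f"E_b ~ {1000 * wannier_mott_binding(0.4, 0.8, 12.0):.1f} meV")
```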

  9. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
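
    The core sampling step described above can be illustrated with a plain random-walk Metropolis sampler over a toy one-parameter damage model, where a cheap analytic function stands in for the sparse-grid surrogate. The DRAM sampler and weighted likelihood used by the authors are omitted, and all numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_strain(damage_size):
    """Toy stand-in for the sparse-grid surrogate of the finite element model."""
    return 1.0 + 0.5 * damage_size**2

# Synthetic noisy strain "measurements" generated from a true damage size of 2.0.
sigma = 0.05
data = surrogate_strain(2.0) + sigma * rng.standard_normal(20)

def log_posterior(theta):
    if not 0.0 < theta < 5.0:          # uniform prior on (0, 5)
        return -np.inf
    resid = data - surrogate_strain(theta)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling.
samples, theta, lp = [], 1.0, log_posterior(1.0)
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])        # discard burn-in
print("posterior mean +/- std:", post.mean(), post.std())
```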

  10. A New Approach for Quality Management in Pervasive Computing Environments

    Directory of Open Access Journals (Sweden)

    Alti Adel

    2013-01-01

    This paper provides an extension of MDA called Context-aware Quality Model Driven Architecture (CQ-MDA), which can be used for quality control in pervasive computing environments. The proposed CQ-MDA approach, based on ContextualArchRQMM (Contextual ARCHitecture Quality Requirement MetaModel) and being an extension to the MDA, allows quality and resource awareness to be considered while conducting the design process. The contributions of this paper are a meta-model for architecture quality control of context-aware applications and a model-driven approach to separate architecture concerns from context and quality concerns and to configure reconfigurable software architectures of distributed systems. To demonstrate the utility of our approach, we use a videoconference system.

  11. Computational study of the reactions of methanol with the hydroperoxyl and methyl radicals. 1. Accurate thermochemistry and barrier heights.

    Science.gov (United States)

    Alecu, I M; Truhlar, Donald G

    2011-04-01

    The reactions of CH3OH with the HO2 and CH3 radicals are important in the combustion of methanol and are prototypes for reactions of heavier alcohols in biofuels. The reaction energies and barrier heights for these reaction systems are computed with CCSD(T) theory extrapolated to the complete basis set limit using correlation-consistent basis sets, both augmented and unaugmented, and further refined by including a fully coupled treatment of the connected triple excitations, a second-order perturbative treatment of quadruple excitations (by CCSDT(2)(Q)), core-valence corrections, and scalar relativistic effects. It is shown that the M08-HX and M08-SO hybrid meta-GGA density functionals can achieve sub-kcal/mol agreement with the high-level ab initio results, identifying these functionals as important potential candidates for direct dynamics studies on the rates of these and homologous reaction systems. PMID:21405059
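
    One routine ingredient of studies like this is a two-point complete-basis-set extrapolation of the correlation energy; a widely used X^-3 formula is sketched below with placeholder energies. The abstract does not state exactly which extrapolation scheme the authors applied, so treat this purely as an illustration.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point X**-3 extrapolation of correlation energies computed
    with basis sets of cardinal numbers x < y (e.g., 3 and 4)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Hypothetical correlation energies (hartree) with cc-pVTZ (X=3) and cc-pVQZ (X=4).
print(cbs_two_point(-0.4512, -0.4678, 3, 4))
```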

  12. A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems

    Energy Technology Data Exchange (ETDEWEB)

    Factorovich, Matías H.; Scherlis, Damián A. [Departamento de Química Inorgánica, Analítica y Química Física/INQUIMAE, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pab. II, Buenos Aires C1428EHA (Argentina); Molinero, Valeria [Department of Chemistry, University of Utah, 315 South 1400 East, Salt Lake City, Utah 84112-0850 (United States)

    2014-02-14

    In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.
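
    The final comparison mentioned above, against the Kelvin equation for a homogeneous droplet, is easy to reproduce in a few lines. The sketch below uses textbook constants for water at room temperature and a hypothetical droplet radius, not the mW-model numbers from the paper.

```python
import math

def kelvin_vapor_pressure(p_flat, radius_m, gamma=0.072, v_m=1.8e-5, T=298.15):
    """Kelvin equation: vapor pressure over a droplet of the given radius.
    gamma: surface tension (N/m); v_m: molar volume of the liquid (m^3/mol)."""
    R = 8.314  # gas constant, J/(mol K)
    return p_flat * math.exp(2.0 * gamma * v_m / (radius_m * R * T))

# Hypothetical 1 nm droplet versus a flat-surface vapor pressure of 3.17 kPa.
print(kelvin_vapor_pressure(3.17e3, 1.0e-9))
```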

  13. Use of CMEIAS Image Analysis Software to Accurately Compute Attributes of Cell Size, Morphology, Spatial Aggregation and Color Segmentation that Signify in Situ Ecophysiological Adaptations in Microbial Biofilm Communities

    Directory of Open Access Journals (Sweden)

    Frank B. Dazzo

    2015-03-01

    In this review, we describe computational features of computer-assisted microscopy that are unique to the Center for Microbial Ecology Image Analysis System (CMEIAS) software, and examples illustrating how they can be used to gain ecophysiological insights into microbial adaptations occurring at micrometer spatial scales directly relevant to individual cells occupying their ecological niches in situ. These features include algorithms that accurately measure (1) microbial cell length relevant to avoidance of protozoan bacteriovory; (2) microbial biovolume body mass relevant to allometric scaling and local apportionment of growth-supporting nutrient resources; (3) pattern recognition rules for morphotype classification of diverse microbial communities relevant to their enhanced fitness for success in the particular habitat; (4) spatial patterns of coaggregation that reveal the local intensity of cooperative vs. competitive adaptations in colonization behavior relevant to microbial biofilm ecology; and (5) object segmentation of complex color images to differentiate target microbes reporting successful cell-cell communication. These unique computational features contribute to the CMEIAS mission of developing accurate and freely accessible tools of image bioinformatics that strengthen microscopy-based approaches for understanding microbial ecology at single-cell resolution.

  14. A low-computational approach on gaze estimation with eye touch system.

    Science.gov (United States)

    Topal, Cihan; Gunal, Serkan; Koçdeviren, Onur; Doğan, Atakan; Gerek, Ömer Nezih

    2014-02-01

    Among various approaches to eye tracking systems, light-reflection based systems with non-imaging sensors, e.g., photodiodes or phototransistors, are known to have relatively low complexity; yet, they provide moderately accurate estimation of the point of gaze. In this paper, a low-computational approach on gaze estimation is proposed using the Eye Touch system, which is a light-reflection based eye tracking system, previously introduced by the authors. Based on the physical implementation of Eye Touch, the sensor measurements are now utilized in low-computational least-squares algorithms to estimate arbitrary gaze directions, unlike the existing light reflection-based systems, including the initial Eye Touch implementation, where only limited predefined regions were distinguished. The system also utilizes an effective pattern classification algorithm to be able to perform left, right, and double clicks based on respective eye winks with significantly high accuracy. In order to avoid accuracy problems for sensitive sensor biasing hardware, a robust custom microcontroller-based data acquisition system is developed. Consequently, the physical size and cost of the overall Eye Touch system are considerably reduced while the power efficiency is improved. The results of the experimental analysis over numerous subjects clearly indicate that the proposed eye tracking system can classify eye winks with 98% accuracy, and attain an accurate gaze direction with an average angular error of about 0.93 °. Due to its lightweight structure, competitive accuracy and low-computational requirements relative to video-based eye tracking systems, the proposed system is a promising human-computer interface for both stationary and mobile eye tracking applications.
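
    The low-computational idea described above, mapping raw sensor readings to gaze coordinates with a least-squares fit, can be sketched as an ordinary linear regression over calibration data. The sensor counts and screen targets below are synthetic; this is not the Eye Touch calibration procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration set: 4 photosensor readings per fixation on 25 targets.
n_cal, n_sensors = 25, 4
targets = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)),
                   axis=-1).reshape(-1, 2)              # known gaze points (x, y)
true_map = rng.uniform(-1, 1, (n_sensors, 2))
readings = targets @ np.linalg.pinv(true_map) + 0.01 * rng.standard_normal((n_cal, n_sensors))

# Least-squares fit of an affine mapping from readings to gaze coordinates.
A = np.hstack([readings, np.ones((n_cal, 1))])          # add bias column
W, *_ = np.linalg.lstsq(A, targets, rcond=None)

# Estimate gaze for a new reading.
new_reading = readings[0]
gaze = np.append(new_reading, 1.0) @ W
print("estimated gaze:", gaze, "true:", targets[0])
```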

  15. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf

    1996-01-01

    Research in scientific programming enables us to realize more and more complex applications, while application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed by the Lagrange formalism using Maple.
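
    The iterative solver mentioned above, the conjugate gradient method for the symmetric positive-definite systems arising from finite element discretizations, fits in a short routine. The small dense test matrix here is only a stand-in for a real assembled stiffness matrix.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system standing in for an assembled stiffness matrix.
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```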

  16. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    Directory of Open Access Journals (Sweden)

    Perrin H. Beatty

    2016-10-01

    A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.
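
    Flux balance analysis, one of the computational approaches highlighted above, reduces to a linear program: maximize a flux of interest subject to steady-state mass balance and flux bounds. The three-reaction toy network below (nutrient uptake, assimilation, export to biomass) is purely illustrative, not a plant nitrogen model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 uptake -> A, R2 A -> B, R3 B -> biomass (exported).
# Rows = metabolites (A, B), columns = reactions (R1, R2, R3).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
c = [0.0, 0.0, -1.0]                        # linprog minimizes, so maximize v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)             # expected [10, 10, 10]
```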

  17. Computational approaches to detect allosteric pathways in transmembrane molecular machines.

    Science.gov (United States)

    Stolzenberg, Sebastian; Michino, Mayako; LeVine, Michael V; Weinstein, Harel; Shi, Lei

    2016-07-01

    Many of the functions of transmembrane proteins involved in signal processing and transduction across the cell membrane are determined by allosteric couplings that propagate the functional effects well beyond the original site of activation. Data gathered from breakthroughs in biochemistry, crystallography, and single molecule fluorescence have established a rich basis of information for the study of molecular mechanisms in the allosteric couplings of such transmembrane proteins. The mechanistic details of these couplings, many of which have therapeutic implications, however, have only become accessible in synergy with molecular modeling and simulations. Here, we review some recent computational approaches that analyze allosteric coupling networks (ACNs) in transmembrane proteins, and in particular the recently developed Protein Interaction Analyzer (PIA) designed to study ACNs in the structural ensembles sampled by molecular dynamics simulations. The power of these computational approaches in interrogating the functional mechanisms of transmembrane proteins is illustrated with selected examples of recent experimental and computational studies pursued synergistically in the investigation of secondary active transporters and GPCRs. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26806157

  18. Staging of osteonecrosis of the jaw requires computed tomography for accurate definition of the extent of bony disease.

    Science.gov (United States)

    Bedogni, Alberto; Fedele, Stefano; Bedogni, Giorgio; Scoletta, Matteo; Favia, Gianfranco; Colella, Giuseppe; Agrillo, Alessandro; Bettini, Giordana; Di Fede, Olga; Oteri, Giacomo; Fusco, Vittorio; Gabriele, Mario; Ottolenghi, Livia; Valsecchi, Stefano; Porter, Stephen; Petruzzi, Massimo; Arduino, Paolo; D'Amato, Salvatore; Ungari, Claudio; Fung Polly, Pok-Lam; Saia, Giorgia; Campisi, Giuseppina

    2014-09-01

    Management of osteonecrosis of the jaw associated with antiresorptive agents is challenging, and outcomes are unpredictable. The severity of disease is the main guide to management, and can help to predict prognosis. Most available staging systems for osteonecrosis, including the widely-used American Association of Oral and Maxillofacial Surgeons (AAOMS) system, classify severity on the basis of clinical and radiographic findings. However, clinical inspection and radiography are limited in their ability to identify the extent of necrotic bone disease compared with computed tomography (CT). We have organised a large multicentre retrospective study (known as MISSION) to investigate the agreement between the AAOMS staging system and the extent of osteonecrosis of the jaw (focal compared with diffuse involvement of bone) as detected on CT. We studied 799 patients with detailed clinical phenotyping who had CT images taken. Features of diffuse bone disease were identified on CT within all AAOMS stages (20%, 8%, 48%, and 24% of patients in stages 0, 1, 2, and 3, respectively). Of the patients classified as stage 0, 110/192 (57%) had diffuse disease on CT, and about 1 in 3 with CT evidence of diffuse bone disease was misclassified by the AAOMS system as having stages 0 and 1 osteonecrosis. In addition, more than a third of patients with AAOMS stage 2 (142/405, 35%) had focal bone disease on CT. We conclude that the AAOMS staging system does not correctly identify the extent of bony disease in patients with osteonecrosis of the jaw.

  19. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

    Science.gov (United States)

    Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.

    2016-01-01

    Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing The Cambridge Face Tracker to the Cervical Range of Motion Device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally-encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low cost, easy-to-setup consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future. PMID:27730008

  20. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
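
    At their most fundamental level, the spreadsheet comparisons described above boil down to multiplying a unit radionuclide concentration by an intake rate and a dose conversion factor. A hedged sketch of that ingestion-pathway arithmetic is shown below, with generic placeholder coefficients rather than values from any of the codes compared.

```python
def ingestion_dose(conc_bq_per_l, intake_l_per_day, exposure_days, dcf_sv_per_bq):
    """Annual committed dose (Sv) from drinking-water ingestion:
    dose = concentration x intake rate x exposure duration x dose coefficient."""
    return conc_bq_per_l * intake_l_per_day * exposure_days * dcf_sv_per_bq

# Unit concentration (1 Bq/L), 2 L/day, 365 days, hypothetical DCF of 2.8e-8 Sv/Bq.
print(f"{ingestion_dose(1.0, 2.0, 365, 2.8e-8):.2e} Sv")
```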

  1. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    Directory of Open Access Journals (Sweden)

    Dongsheng Che

    2014-01-01

    High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria, and these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genomes, and containing mobility genes so that they can be integrated into the host genome. In this review, we will discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources will also be discussed, so that researchers may find them useful for studies of bacterial evolution and pathogenicity mechanisms.

  2. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  3. Computational approaches for rational design of proteins with novel functionalities

    Directory of Open Access Journals (Sweden)

    Manish Kumar Tiwari

    2012-09-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  4. Computational approaches for rational design of proteins with novel functionalities.

    Science.gov (United States)

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  5. Predictive value of multi-detector computed tomography for accurate diagnosis of serous cystadenoma: Radiologic-pathologic correlation

    Institute of Scientific and Technical Information of China (English)

    Anjuli A Shah; Nisha I Sainani; Avinash Kambadakone Ramesh; Zarine K Shah; Vikram Deshpande; Peter F Hahn; Dushyant V Sahani

    2009-01-01

    AIM: To identify multi-detector computed tomography (MDCT) features most predictive of serous cystadenomas (SCAs), correlating with histopathology, and to study the impact of cyst size and MDCT technique on reader performance. METHODS: The MDCT scans of 164 patients with surgically verified pancreatic cystic lesions were reviewed by two readers to study the predictive value of various morphological features for establishing a diagnosis of SCAs. Accuracy in lesion characterization and reader confidence were correlated with lesion size (≤3 cm or ≥3 cm) and scanning protocols (dedicated vs routine). RESULTS: 28/164 cysts (mean size, 39 mm; range, 8-92 mm) were diagnosed as SCA on pathology. The MDCT features predictive of a diagnosis of SCA were microcystic appearance (22/28, 78.6%), surface lobulations (25/28, 89.3%) and central scar (9/28, 32.4%). Stepwise logistic regression analysis showed that only microcystic appearance was significant for CT diagnosis of SCA (P=0.0001). The sensitivity, specificity and PPV of central scar and of combined microcystic appearance and lobulations were 32.4%/100%/100% and 68%/100%/100%, respectively. Reader confidence was higher for lesions >3 cm (P=0.02) and for MDCT scans performed using thin collimation (1.25-2.5 mm) compared to routine 5 mm collimation exams (P>0.05). CONCLUSION: Central scar on MDCT is diagnostic of SCA but is seen in only one third of SCAs. Microcystic morphology is the most significant CT feature in the diagnosis of SCA. A combination of microcystic appearance and surface lobulations offers accuracy comparable to central scar with higher sensitivity.

  6. A pencil beam approach to proton computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Rescigno, Regina, E-mail: regina.rescigno@iphc.cnrs.fr; Bopp, Cécile; Rousseau, Marc; Brasse, David [Université de Strasbourg, IPHC, 23 rue du Loess, Strasbourg 67037, France and CNRS, UMR7178, Strasbourg 67037 (France)

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one but a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of the mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a high level of noise compared to the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performances and limitations of

  7. Computational systems biology approaches to anti-angiogenic cancer therapeutics.

    Science.gov (United States)

    Finley, Stacey D; Chu, Liang-Hui; Popel, Aleksander S

    2015-02-01

    Angiogenesis is an exquisitely regulated process that is required for physiological processes and is also important in numerous diseases. Tumors utilize angiogenesis to generate the vascular network needed to supply the cancer cells with nutrients and oxygen, and many cancer drugs aim to inhibit tumor angiogenesis. Anti-angiogenic therapy involves inhibiting multiple cell types, molecular targets, and intracellular signaling pathways. Computational tools are useful in guiding treatment strategies, predicting the response to treatment, and identifying new targets of interest. Here, we describe progress that has been made in applying mathematical modeling and bioinformatics approaches to study anti-angiogenic therapeutics in cancer.

  8. A New Computational Scheme for Computing Greeks by the Asymptotic Expansion Approach

    OpenAIRE

    Matsuoka, Ryosuke; Takahashi, Akihiko; Uchida, Yoshihiko

    2005-01-01

    We developed a new scheme for computing "Greeks" of derivatives by an asymptotic expansion approach. In particular, we derived analytical approximation formulae for deltas and Vegas of plain vanilla and average European call options under general Markovian processes of underlying asset prices. Moreover, we introduced a new variance reduction method for Monte Carlo simulations based on the asymptotic expansion scheme. Finally, several numerical examples under CEV processes confirmed the validity...
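
    For readers unfamiliar with Monte Carlo Greeks, the sketch below shows a plain finite-difference delta with common random numbers for a vanilla call under geometric Brownian motion; it is a generic baseline for comparison, not the asymptotic-expansion formulae or the CEV setting of the paper.

```python
# Illustrative baseline only: Monte Carlo delta of a European call via central
# finite differences with common random numbers (a simple variance reduction).
# Geometric Brownian motion is assumed here instead of a CEV process.

import numpy as np

def mc_call_delta(s0, strike, rate, vol, maturity, n_paths=200_000, bump=0.01, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)               # common random numbers for both bumps

    def price(s):
        st = s * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
        return np.exp(-rate * maturity) * np.maximum(st - strike, 0.0).mean()

    return (price(s0 + bump) - price(s0 - bump)) / (2.0 * bump)

print(f"MC delta ~ {mc_call_delta(100.0, 100.0, 0.01, 0.2, 1.0):.4f}")
```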

  9. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Directory of Open Access Journals (Sweden)

    Dilip Swaminathan

    2009-01-01

    kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.

  10. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    Junaid Ali Khan; Muhammad Asif Zahoor Raja; Ijaz Mansoor Qureshi

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with genetic algorithm and local search by pattern search technique. The applicability of this approach ranges from single order NLODEs, to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval unlike the other numerical techniques with comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in the execution speed.
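
    A minimal sketch of the trial-solution idea is given below for the linear test problem u'(t) = -u(t), u(0) = 1; the tiny network, the collocation grid and the random-restart hill climb that stands in for the GA + pattern search hybrid are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a neural-network trial solution for an ODE. The trial
# u(t) = 1 + t * N(t, w) satisfies u(0) = 1 by construction, and the
# unsupervised error is the squared ODE residual at collocation points.
# The simple shrinking-step random search below is only a stand-in for the
# paper's hybrid genetic-algorithm / pattern-search optimizer.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 41)          # collocation points on [0, 2]
h = 1e-4                                # step for the numerical derivative
n_hidden = 8

def net(tt, w):
    w1, b1, w2 = w[:n_hidden], w[n_hidden:2 * n_hidden], w[2 * n_hidden:]
    return np.tanh(np.outer(tt, w1) + b1) @ w2

def trial(tt, w):
    return 1.0 + tt * net(tt, w)

def residual(w):
    du = (trial(t + h, w) - trial(t - h, w)) / (2 * h)
    return np.mean((du + trial(t, w)) ** 2)        # residual of u' + u = 0

w = rng.normal(scale=0.5, size=3 * n_hidden)
best, step = residual(w), 0.5
for it in range(20000):
    cand = w + rng.normal(scale=step, size=w.size)
    err = residual(cand)
    if err < best:
        w, best = cand, err
    if it % 5000 == 4999:
        step *= 0.5                                 # shrink step, pattern-search style

print(f"final residual {best:.2e}; u(1) ~ {trial(np.array([1.0]), w)[0]:.4f} (exact {np.exp(-1):.4f})")
```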

  11. A new approach in CHP steam turbines thermodynamic cycles computations

    Directory of Open Access Journals (Sweden)

    Grković Vojin R.

    2012-01-01

    Full Text Available This paper presents a new approach to the mathematical modeling of thermodynamic cycles and electric power of utility district-heating and cogeneration steam turbines. The approach is based on the application of dimensionless mass flows, which describe the thermodynamic cycle of a combined heat and power steam turbine. The mass flows are calculated relative to the mass flow to the low pressure turbine. The procedure introduces the extraction mass flow load parameter νh, which clearly indicates the energy transformation process, as well as the cogeneration turbine design features, but also its fitness for the electrical energy system requirements. The presented approach allows fast computations, as well as direct calculation of the selected energy efficiency indicators. The approach is exemplified with calculation results of the district heat power to electric power ratio, as well as the cycle efficiency, versus νh. The influence of νh on the conformity of a combined heat and power turbine to the grid requirements is also analyzed and discussed. [Project of the Ministry of Science of the Republic of Serbia, no. 33049: Development of a CHP demonstration plant with gasification of biomass]

  12. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    Science.gov (United States)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  13. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).

  14. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  15. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    International Nuclear Information System (INIS)

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, so NWP can be considered as an inverse problem to uncover the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus resolve the deficiency of NWP schemes that employ only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained using inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)
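
    As a point of reference for the test problem mentioned above, the sketch below integrates the viscous Burgers' equation with a simple explicit finite-difference scheme; grid sizes, viscosity and boundary conditions are arbitrary illustrative choices, and no inverse-problem error correction is attempted.

```python
# Small illustration (not the paper's inverse-problem algorithm): an explicit
# upwind/central finite-difference solver for viscous Burgers' equation
# u_t + u u_x = nu u_xx. A "model error" could be emulated by perturbing nu
# or adding a forcing term; that step is omitted here.

import numpy as np

def burgers(nx=201, nt=2000, nu=0.01, length=2 * np.pi):
    dx = length / (nx - 1)
    dt = 0.4 * min(dx / 2.0, dx**2 / (2 * nu))   # crude explicit stability choice
    x = np.linspace(0.0, length, nx)
    u = np.sin(x) + 1.0                          # illustrative initial condition
    for _ in range(nt):
        un = u.copy()
        u[1:-1] = (un[1:-1]
                   - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])            # upwind advection
                   + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2])) # diffusion
        u[0] = u[-1] = 1.0                       # simple fixed boundaries
    return x, u

x, u = burgers()
print(f"solution range after integration: [{u.min():.3f}, {u.max():.3f}]")
```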

  16. A Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues are examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and, then, categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
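
    One of the stereo-matching ingredients cited above, the census transform, can be sketched compactly; the version below is the plain (unmodified) transform with an assumed 5x5 window, so it illustrates the idea rather than the modified variant used in the paper.

```python
# Illustrative sketch of a plain census transform: each pixel is encoded as a
# bit string of comparisons with its neighbours, and stereo matching costs are
# Hamming distances between codes. Window size and the toy demo are assumptions.

import numpy as np

def census_transform(img, window=5):
    r = window // 2
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode="edge")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def hamming(a, b):
    # popcount of XOR gives the per-pixel matching cost
    x = np.bitwise_xor(a, b)
    return np.unpackbits(x.view(np.uint8).reshape(*x.shape, 8), axis=-1).sum(axis=-1)

left = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
right = np.roll(left, 2, axis=1)                 # fake second view, shifted by 2 px
cost = hamming(census_transform(left), census_transform(right))
print("mean Hamming matching cost:", cost.mean())
```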

  17. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    Science.gov (United States)

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.

  18. The advantage of three dimensional computed tomography (3D-CT) for ensuring accurate bone incision in sagittal split ramus osteotomy

    Directory of Open Access Journals (Sweden)

    Coen Pramono D

    2005-03-01

    Full Text Available Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including the choice of surgical technique, which depends on the differences in anatomical structures amongst individuals. Programs that simulate the surgery are becoming increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, or another, more sophisticated method such as three dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramens were seen to lie relatively high in relation to the sigmoid notches. Therefore, ensuring accurate bone incisions in the sagittal split was presumed to be difficult. A 3D-CT was made and considered to be very helpful in supporting the pre-operative diagnosis.

  19. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors.

    Science.gov (United States)

    Gayvert, Kaitlyn M; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark A; Tatonetti, Nicholas P; Rickman, David S; Elemento, Olivier

    2016-06-14

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a computational drug-repositioning approach for targeting TF activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently overexpressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of electronic medical record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy for identifying drugs that specifically modulate TF activity. PMID:27264179

  20. Leaching from Heterogeneous Heck Catalysts: A Computational Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The possibility of carrying out a purely heterogeneous Heck reaction in practice without Pd leaching has been previously considered by a number of research groups, but no general consensus has yet been reached. Here, the reaction was, for the first time, evaluated by a simple computational approach. Modelling experiments were performed on one of the initial catalytic steps: phenyl halide attachment on the (111)-to-(100) and (111)-to-(111) ridges of a Pd crystal. Three surface structures of the resulting [PhPdX] were identified as possible reactive intermediates. Following potential energy minimisation calculations based on a universal force field, the relative stabilities of these surface species were then determined. Results showed the most stable species to be one in which a Pd ridge atom is removed from the Pd crystal structure, suggesting that Pd leaching induced by phenyl halides is energetically favourable.

  1. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov

    2013-07-01

    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet – oral microbiome – host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high throughput computational methods tightly integrated with translational systems medicine. Those approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.

  2. Vehicular traffic noise prediction using soft computing approach.

    Science.gov (United States)

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

    A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data.
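
    A hedged sketch of this modelling setup is shown below using scikit-learn; the synthetic data stand in for the Patiala measurements, and the noise model used to generate Leq is an assumption for illustration only.

```python
# Hedged sketch of the Random Forest variant described above, with synthetic
# data in place of the field measurements (which are not available here).
# Feature names follow the abstract; the Leq generating formula is made up.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
traffic_volume = rng.uniform(200, 3000, n)        # vehicles per hour
heavy_pct = rng.uniform(2, 40, n)                 # % heavy vehicles
avg_speed = rng.uniform(20, 70, n)                # km/h
leq = (45 + 8 * np.log10(traffic_volume) + 0.1 * heavy_pct
       + 0.05 * avg_speed + rng.normal(0, 1.5, n))   # illustrative noisy target

X = np.column_stack([traffic_volume, heavy_pct, avg_speed])
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, leq, cv=10, scoring="r2")   # 10-fold CV as in the study
print(f"10-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```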

  3. Computer Modeling of Violent Intent: A Content Analysis Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  4. Computer Aided Interpretation Approach for Optical Tomographic Images

    CERN Document Server

    Klose, Christian D; Netz, Uwe; Beuthan, Juergen; Hielscher, Andreas H

    2010-01-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) of human finger joints in optical tomographic images. The image interpretation method employs a multi-variate signal detection analysis aided by a machine learning classification algorithm, called Self-Organizing Mapping (SOM). Unlike in previous studies, this allows for combining multiple physical image parameters, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging and inspection of optical tomographic images) were used as "ground truth" benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities while...

  5. Computational Approach to Seasonal Changes of Living Leaves

    Directory of Open Access Journals (Sweden)

    Ying Tang

    2013-01-01

    Full Text Available This paper proposes a computational approach to seasonal changes of living leaves by combining the geometric deformations and textural color changes. The geometric model of a leaf is generated by triangulating the scanned image of a leaf using an optimized mesh. The triangular mesh of the leaf is deformed by the improved mass-spring model, while the deformation is controlled by setting different mass values for the vertices on the leaf model. In order to adaptively control the deformation of different regions in the leaf, the mass values of vertices are set to be in proportion to the pixels' intensities of the corresponding user-specified grayscale mask map. The geometric deformations as well as the textural color changes of a leaf are used to simulate the seasonal changing process of leaves based on Markov chain model with different environmental parameters including temperature, humidness, and time. Experimental results show that the method successfully simulates the seasonal changes of leaves.
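
    The Markov-chain component can be illustrated with a toy colour-state model, as in the sketch below; the states, the temperature-dependent transition matrix and the cooling schedule are invented for illustration, and the coupled humidity, time and mass-spring deformation of the paper are omitted.

```python
# Toy sketch of a Markov chain for seasonal colour states of a leaf.
# The four states and the way temperature modulates the transition
# probabilities are assumptions made for illustration only.

import numpy as np

STATES = ["green", "yellow", "red", "brown"]

def transition_matrix(temperature_c):
    # cooler weather accelerates progression toward senescence (assumption)
    p = np.clip(0.05 + (20.0 - temperature_c) * 0.01, 0.01, 0.6)
    return np.array([[1 - p, p,     0.0,   0.0],
                     [0.0,   1 - p, p,     0.0],
                     [0.0,   0.0,   1 - p, p  ],
                     [0.0,   0.0,   0.0,   1.0]])   # brown is absorbing

rng = np.random.default_rng(3)
state = 0
for day, temp in enumerate(np.linspace(22, 2, 60)):   # a cooling 60-day period
    state = rng.choice(4, p=transition_matrix(temp)[state])
print("state after 60 days:", STATES[state])
```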

  6. Computational Diagnostic: A Novel Approach to View Medical Data.

    Energy Technology Data Exchange (ETDEWEB)

    Mane, K. K. (Ketan Kirtiraj); Börner, K. (Katy)

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight about a patient's medical condition. The paper details different interactive features of the tool which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags behind in seeking immediate benefits of the new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, we have noticed a drive promoting a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier attempts at an EHR replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or offered interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', which provides a novel computational approach to look at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with data is supported to help doctors practice evidence-based decisions and make judicious

  7. A computational approach for identifying pathogenicity islands in prokaryotic genomes

    Directory of Open Access Journals (Sweden)

    Oh Tae Kwang

    2005-07-01

    Full Text Available Abstract Background Pathogenicity islands (PAIs), distinct genomic segments of pathogens encoding virulence factors, represent a subgroup of genomic islands (GIs) that have been acquired by horizontal gene transfer events. Up to now, computational approaches for identifying PAIs have been focused on the detection of genomic regions which only differ from the rest of the genome in their base composition and codon usage. These approaches often lead to the identification of genomic islands, rather than PAIs. Results We present a computational method for detecting potential PAIs in complete prokaryotic genomes by combining sequence similarities and abnormalities in genomic composition. We first collected 207 GenBank accessions containing either part or all of the reported PAI loci. In sequenced genomes, strips of PAI-homologs were defined based on the proximity of the homologs of genes in the same PAI accession. An algorithm reminiscent of a sequence-assembly procedure was then devised to merge overlapping or adjacent genomic strips into a large genomic region. Among the defined genomic regions, PAI-like regions were identified by the presence of homolog(s) of virulence genes. Also, GIs were postulated by calculating G+C content anomalies and codon usage bias. Of 148 prokaryotic genomes examined, 23 pathogenic and 6 non-pathogenic bacteria contained 77 candidate PAIs that partly or entirely overlap GIs. Conclusion Supporting the validity of our method, the list of candidate PAIs included thirty-four PAIs previously identified from genome sequencing papers. Furthermore, in some instances, our method was able to detect entire PAIs in cases where only partial sequences are available. Our method proved to be an efficient method for demarcating the potential PAIs in our study. Also, the function(s) and origin(s) of a candidate PAI can be inferred by investigating the PAI queries comprising it. Identification and analysis of potential PAIs in prokaryotic
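
    One ingredient of the GI detection step, flagging windows with anomalous G+C content, can be sketched in a few lines; the window size, step and 2-sigma threshold below are illustrative choices rather than the parameters used in the paper.

```python
# Simple sketch of sliding-window G+C anomaly detection, a standard signal
# for horizontally acquired regions. Window, step and threshold are assumptions.

import numpy as np

def gc_anomalies(genome, window=5000, step=1000, z_cut=2.0):
    gc = np.array([1.0 if b in "GCgc" else 0.0 for b in genome])
    starts = range(0, max(len(genome) - window, 0) + 1, step)
    frac = np.array([gc[s:s + window].mean() for s in starts])
    z = (frac - frac.mean()) / (frac.std() + 1e-12)
    return [(s, s + window, f) for s, f, zz in zip(starts, frac, z) if abs(zz) > z_cut]

# toy genome: AT-rich background with a GC-rich insert in the middle
rng = np.random.default_rng(0)
background = "".join(rng.choice(list("ATATGC"), size=200_000).tolist())
insert = "".join(rng.choice(list("GCGCAT"), size=20_000).tolist())
genome = background[:100_000] + insert + background[100_000:]
print(gc_anomalies(genome)[:3])   # windows overlapping the GC-rich insert
```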

  8. A computational approach for deciphering the organization of glycosaminoglycans.

    Directory of Open Access Journals (Sweden)

    Jean L Spencer

    Full Text Available BACKGROUND: Increasing evidence has revealed important roles for complex glycans as mediators of normal and pathological processes. Glycosaminoglycans are a class of glycans that bind and regulate the function of a wide array of proteins at the cell-extracellular matrix interface. The specific sequence and chemical organization of these polymers likely define function; however, identification of the structure-function relationships of glycosaminoglycans has been met with challenges associated with the unique level of complexity and the nontemplate-driven biosynthesis of these biopolymers. METHODOLOGY/PRINCIPAL FINDINGS: To address these challenges, we have devised a computational approach to predict fine structure and patterns of domain organization of the specific glycosaminoglycan, heparan sulfate (HS. Using chemical composition data obtained after complete and partial digestion of mixtures of HS chains with specific degradative enzymes, the computational analysis produces populations of theoretical HS chains with structures that meet both biosynthesis and enzyme degradation rules. The model performs these operations through a modular format consisting of input/output sections and three routines called chainmaker, chainbreaker, and chainsorter. We applied this methodology to analyze HS preparations isolated from pulmonary fibroblasts and epithelial cells. Significant differences in the general organization of these two HS preparations were observed, with HS from epithelial cells having a greater frequency of highly sulfated domains. Epithelial HS also showed a higher density of specific HS domains that have been associated with inhibition of neutrophil elastase. Experimental analysis of elastase inhibition was consistent with the model predictions and demonstrated that HS from epithelial cells had greater inhibitory activity than HS from fibroblasts. CONCLUSIONS/SIGNIFICANCE: This model establishes the conceptual framework for a new class of

  9. A NEW APPROACH TOWARDS INTEGRATED CLOUD COMPUTING ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    Niloofar Khanghahi

    2014-03-01

    Full Text Available Today, across various businesses, administrative and senior managers are seeking new technologies and approaches that they can utilize more easily and affordably, and thereby raise their competitive profit and utility. Information and Communications Technology (ICT) is no exception to this principle. The cloud computing concept and technology, with its inherent advantages, has created a new ecosystem in the world of computing and is driving the ICT industry one step forward. This technology can play an important role in an organization's durability and IT strategies. Nowadays, due to the progress and global popularity of cloud environments, many organizations are moving to the cloud, and some well-known IT solution providers such as IBM and Oracle have introduced specific architectures to be deployed in cloud environments. On the other hand, the use of IT frameworks can be the best way to integrate business processes with other processes. The purpose of this paper is to provide a novel architecture for the cloud environment, based on recent best practices, frameworks, and other cloud reference architectures. Meanwhile, a new service model is introduced in this proposed architecture. This architecture is finally compared with a few other architectures in the form of statistical graphs to show its benefits.

  10. Cloud computing approaches for prediction of ligand binding poses and pathways.

    Science.gov (United States)

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S

    2015-01-22

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design.

  11. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  12. Computational approaches to protein inference in shotgun proteomics.

    Science.gov (United States)

    Li, Yong Fuga; Radivojac, Predrag

    2012-01-01

    Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
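
    The parsimony flavour of the combinatorial-optimization approaches mentioned above can be illustrated by a greedy minimal set cover over peptide-to-protein mappings; the sketch below uses toy data and omits the probabilistic scoring that practical tools add.

```python
# Illustrative greedy set-cover sketch of parsimonious protein inference:
# explain all identified peptides with as few proteins as possible.
# The peptide/protein names below are toy placeholders.

def greedy_protein_inference(protein_to_peptides):
    remaining = set().union(*protein_to_peptides.values())
    selected = []
    while remaining:
        # pick the protein explaining the most still-unexplained peptides
        best = max(protein_to_peptides, key=lambda p: len(protein_to_peptides[p] & remaining))
        covered = protein_to_peptides[best] & remaining
        if not covered:
            break
        selected.append(best)
        remaining -= covered
    return selected

db = {
    "P1": {"pepA", "pepB", "pepC"},
    "P2": {"pepB"},               # subsumed by P1, so never selected
    "P3": {"pepD", "pepE"},
}
print(greedy_protein_inference(db))   # e.g. ['P1', 'P3']
```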

  13. Computational approaches to protein inference in shotgun proteomics

    Directory of Open Access Journals (Sweden)

    Li Yong

    2012-11-01

    Full Text Available Abstract Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area.

  14. Computer-Aided Approaches for Targeting HIVgp41

    Directory of Open Access Journals (Sweden)

    William J. Allen

    2012-08-01

    Full Text Available Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV) delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41), which advances through four distinct conformational states: (i) native, (ii) pre-hairpin intermediate, (iii) fusion active (fusogenic), and (iv) post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that the gp41 N-terminal heptad repeat (NHR) and C-terminal heptad repeat (CHR) domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA-approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.

  15. A computational approach for prediction of donor splice sites with improved accuracy.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input to an artificial neural network (ANN), a support vector machine (SVM) and a random forest (RF) for prediction. ANN and SVM were found to perform equally well and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed by using an independent test set of 50 genes, and the prediction accuracy of ANN was found to be higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
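
    The general encode-then-classify pipeline can be sketched as below; the one-hot encoding, the randomly generated 15-base windows and the SVM hyperparameters are assumptions for illustration and are not the encoding or datasets (HS3D, NN269) used by the authors.

```python
# Hedged sketch of the pipeline: encode fixed-length windows around candidate
# donor sites as numeric vectors and train a classifier. Sequences are random
# stand-ins; true sites carry the canonical GT dinucleotide at the junction.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    v = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        v[i, BASES[b]] = 1.0
    return v.ravel()

rng = np.random.default_rng(0)

def random_site(true_site):
    s = rng.choice(list("ACGT"), size=15).tolist()
    if true_site:
        s[7:9] = ["G", "T"]            # canonical GT at the assumed donor position
    return "".join(s)

X = np.array([one_hot(random_site(i % 2 == 0)) for i in range(2000)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(2000)])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```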

  16. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    Science.gov (United States)

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  17. An evolutionary computation approach to examine functional brain plasticity

    Directory of Open Access Journals (Sweden)

    Arnab eRoy

    2016-04-01

    Full Text Available One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in

  18. Human Computer Interaction Approach in Developing Customer Relationship Management

    Directory of Open Access Journals (Sweden)

    Mohd H.N.M. Nasir

    2008-01-01

    Full Text Available Problem statement: Many published studies have found that more than 50% of Customer Relationship Management (CRM) system implementations have failed due to poor system usability and failure to fulfil user expectations. This study presents the issues that contributed to the failures of CRM systems and proposes a prototype CRM system developed using Human Computer Interaction approaches in order to resolve the identified issues. Approach: In order to capture the users' requirements, a single in-depth case study of a multinational company was chosen for this research, in which the background, current conditions and environmental interactions were observed, recorded and analyzed for patterns in relation to internal and external influences. Several blended data-gathering techniques, namely interviews, naturalistic observation and the study of user documentation, were employed, and the prototype CRM system was then developed incorporating a User-Centered Design (UCD) approach, Hierarchical Task Analysis (HTA), metaphor, and identification of users' behaviors and characteristics. The implementation of these techniques was then measured in terms of usability. Results: Based on the usability testing conducted, most of the users agreed that the system is comfortable to work with, taking the quality attributes of learnability, memorability, utility, sortability, font, visualization, user metaphor, ease of information viewing and color as measurement parameters. Conclusions/Recommendations: By combining all these techniques, a comfort level for users that leads to user satisfaction and a higher degree of usability can be achieved in the proposed CRM system. Thus, it is important that companies take usability quality attributes into consideration before developing or procuring a CRM system, to ensure the successful implementation of the CRM system.

  19. Benchmark ab initio thermochemistry of the isomers of diimide, $N_{2}H_2$, using accurate computed structures and anharmonic force fields

    CERN Document Server

    Martin, J M L; Martin, Jan M.L.; Taylor, Peter R.

    1999-01-01

    A benchmark ab initio study on the thermochemistry of the trans-HNNH, cis-HNNH, and H$_2$NN isomers of diazene has been carried out using the CCSD(T) coupled cluster method, basis sets as large as $[7s6p5d4f3g2h/5s4p3d2f1g]$, and extrapolations towards the 1-particle basis set limit. The effects of inner-shell correlation and of anharmonicity in the zero-point energy were taken into account: accurate geometries and anharmonic force fields were thus obtained as by-products. Our best computed $\Delta H^\circ_{f,0}$ for trans-HNNH, 49.2 \pm 0.3 kcal/mol, is in very good agreement with a recent experimental lower limit of 48.8 \pm 0.5 kcal/mol. CCSD(T) basis set limit values for the isomerization energies at 0 K are 5.2 \pm 0.2 kcal/mol (cis-trans) and 24.1 \pm 0.2 kcal/mol (iso-trans). Our best computed geometry for trans-HNNH, $r_e$(NN)=1.2468 Å and $r_e$(NH)=1.0283 Å, reproduces the rotational constants of trans-HNNH to within better than 0.1%. The rotation-vibration spectra of both cis-HNNH and H$_2$NN are dominated by ...

  20. Efficient and accurate modeling of multi-wavelength propagation in SOAs: a generalized coupled-mode approach

    CERN Document Server

    Antonelli, Cristian; Li, Wangzhe; Coldren, Larry

    2015-01-01

    We present a model for multi-wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA structures, takes into account the longitudinal dependence of carrier density caused by saturation, accommodates arbitrary functional dependencies of the material gain and carrier recombination rate on the local value of carrier density, and is computationally more efficient by orders of magnitude compared with the standard full model based on space-time equations. We apply the coupled-mode equations model to a recently demonstrated phase-sensitive amplifier based on an integrated SOA and prove its results to be consistent with the experimental data. The accuracy of the proposed model is certified by means of a meticulous comparison with the results obtained by integrating the space-time equations.

  1. A Representation-Theoretic Approach to Reversible Computation with Applications

    DEFF Research Database (Denmark)

    Maniotis, Andreas Milton

    Reversible computing is a sub-discipline of computer science that helps to understand the foundations of the interplay between physics, algebra, and logic in the context of computation. Its subjects of study are computational devices and abstract models of computation that satisfy the constraint... However, there is still no uniform and consistent theory that is general in the sense of giving a model-independent account of the field.

  2. Performance Analysis of Daubechies Wavelet and Differential Pulse Code Modulation Based Multiple Neural Networks Approach for Accurate Compression of Images

    Directory of Open Access Journals (Sweden)

    S.Sridhar

    2013-09-01

    Full Text Available Large images in general contain huge quantities of data, demanding highly efficient hybrid image compression systems that combine various techniques. We propose and implement a Daubechies wavelet transform and Differential Pulse Code Modulation (DPCM) based multiple-neural-network hybrid model for image encoding and decoding, combining the advantages of wavelets, neural networks and DPCM: wavelet transforms are sets of mathematical functions that have established their viability in image compression owing to the computational simplicity of their implementation; artificial neural networks can generalize inputs even on untrained data owing to their massively parallel architectures; and DPCM reduces redundancy based on predicted sample values. Initially the input image is subjected to two-level decomposition using Daubechies family wavelet filters, generating high-scale low-frequency approximation coefficients A2 and high-frequency detail coefficients H2, V2, D2, H1, V1 and D1 of multiple resolutions resembling different frequency bands. Scalar quantization and Huffman encoding schemes are used for compressing the different sub-bands based on their statistical properties, i.e., the low-frequency approximation coefficients are compressed by DPCM while the high-frequency detail coefficients are compressed with neural networks. Empirical analysis is performed and objective fidelity metrics are calculated and tabulated for analysis.
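
    The decomposition and quantisation stage can be sketched with PyWavelets, as below; the sketch performs only the two-level db2 decomposition and a coarse scalar quantisation of the detail sub-bands, leaving out the DPCM and neural-network branches, and the step size is an arbitrary illustrative choice.

```python
# Sketch of the decomposition/quantisation stage only (no DPCM or neural
# network): two-level Daubechies wavelet decomposition followed by coarse
# scalar quantisation of the detail sub-bands. PyWavelets ('pywt') is assumed.

import numpy as np
import pywt

image = np.random.default_rng(0).uniform(0, 255, (256, 256))   # stand-in image

# two-level decomposition: [A2, (H2, V2, D2), (H1, V1, D1)]
coeffs = pywt.wavedec2(image, "db2", level=2)

def quantise(band, step):
    return np.round(band / step) * step

approx = coeffs[0]                                   # would feed the DPCM branch
details = [tuple(quantise(b, step=16.0) for b in lvl) for lvl in coeffs[1:]]

recon = pywt.waverec2([approx] + details, "db2")
recon = recon[:image.shape[0], :image.shape[1]]      # waverec2 may pad by a pixel
mse = np.mean((image - recon) ** 2)
print(f"MSE after detail-band quantisation: {mse:.2f}")
```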

  3. Cone beam computed tomography: An accurate imaging technique in comparison with orthogonal portal imaging in intensity-modulated radiotherapy for prostate cancer

    Directory of Open Access Journals (Sweden)

    Om Prakash Gurjar

    2016-03-01

    Full Text Available Purpose: Various factors cause geometric uncertainties during prostate radiotherapy, including interfractional and intrafractional patient motion, organ motion, and daily setup errors. This may lead to increased normal tissue complications when a high dose to the prostate is administered. More accurate treatment delivery is possible with daily imaging and localization of the prostate. This study aims to measure the shift of the prostate by using kilovoltage (kV) cone beam computed tomography (CBCT) after position verification by kV orthogonal portal imaging (OPI). Methods: Position verification in 10 patients with prostate cancer was performed by using OPI followed by CBCT before treatment delivery in 25 sessions per patient. In each session, OPI was performed by using an on-board imaging (OBI) system and pelvic bone-to-pelvic bone matching was performed. After applying the shift noted by using OPI, CBCT was performed by using the OBI system and prostate-to-prostate matching was performed. The isocenter shifts along all three translational directions in both techniques were combined into a three-dimensional (3-D) iso-displacement vector (IDV). Results: The mean (SD) IDV (in centimeters) calculated during the 250 imaging sessions was 0.931 (0.598), median 0.825, for OPI and 0.515 (0.336), median 0.43, for CBCT; the p-value was less than 0.0001, which indicates an extremely statistically significant difference. Conclusion: Even after bone-to-bone matching by using OPI, a significant shift in the prostate was observed on CBCT. This study concludes that imaging with CBCT provides more accurate prostate localization than the OPI technique. Hence, CBCT should be chosen as the preferred imaging technique.

  4. Quantum dynamics of two quantum dots coupled through localized plasmons: An intuitive and accurate quantum optics approach using quasinormal modes

    Science.gov (United States)

    Ge, Rong-Chun; Hughes, Stephen

    2015-11-01

    We study the quantum dynamics of two quantum dots (QDs) or artificial atoms coupled through the fundamental localized plasmon of a gold nanorod resonator. We derive an intuitive and efficient time-local master equation, in which the effect of the metal nanorod is taken into consideration self-consistently using a quasinormal mode (QNM) expansion technique of the photon Green function. Our efficient QNM technique offers an alternative and more powerful approach over the standard Jaynes-Cummings model, where the radiative decay, nonradiative decay, and spectral reshaping effect of the electromagnetic environment are rigorously included in a clear and transparent way. We also show how one can use our approach to complement the approximate Jaynes-Cummings model in certain spatial regimes where it is deemed to be valid. We then present a study of the quantum dynamics and photoluminescence spectra of the two plasmon-coupled QDs. We first explore the non-Markovian regime, which is found to be important only on the ultrashort time scale of the plasmon mode, which is about 40 fs. For the field-free evolution case of excited QDs near the nanorod, we demonstrate how spatially separated QDs can be effectively coupled through the plasmon resonance and we show how frequencies away from the plasmon resonance can be more effective for coherently coupling the QDs. Despite the strong inherent dissipation of gold nanoresonators, we show that qubit entanglements as large as 0.7 can be achieved from an initially separate state, which has been limited to less than 0.5 in previous work for weakly coupled reservoirs. We also study the superradiance and subradiance decay dynamics of the QD pair. Finally, we investigate the rich quantum dynamics of QDs that are incoherently pumped, and study the polarization dependent behavior of the emitted photoluminescence spectrum where a double-resonance structure is observed due to the strong photon exchange interactions. Our general quantum plasmonics

  5. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    Science.gov (United States)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
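
    As a rough illustration of the Gibbs energy minimization idea (not Reaktoro's or GEMS3K's actual algorithms), the sketch below minimizes the ideal-mixture Gibbs energy of a four-species toy system subject to element-balance constraints using SciPy; the standard chemical potentials are placeholder values, not fitted thermodynamic data.

```python
import numpy as np
from scipy.optimize import minimize

# Species: CO, H2O, CO2, H2 (a toy water-gas-shift system).
# mu0 are illustrative dimensionless standard chemical potentials (mu0/RT).
mu0 = np.array([-48.0, -92.0, -137.0, -6.0])

# Element-balance matrix A (rows: C, O, H) and element totals b.
A = np.array([[1, 0, 1, 0],    # C
              [1, 1, 2, 0],    # O
              [0, 2, 0, 2]])   # H
n_init = np.array([1.0, 1.0, 0.0, 0.0])   # start from 1 mol CO + 1 mol H2O
b = A @ n_init

def gibbs(n):
    # Ideal-mixture Gibbs energy G/RT = sum n_i (mu0_i + ln x_i)
    n = np.clip(n, 1e-12, None)            # keep the log well defined
    return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

res = minimize(gibbs, n_init + 0.1, method="SLSQP",
               bounds=[(1e-12, None)] * 4,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})

print("equilibrium amounts (mol):", np.round(res.x, 4))
```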

  6. Applying a cloud computing approach to storage architectures for spacecraft

    Science.gov (United States)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. In conventional computing, many solutions have been developed to address both complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
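
    A minimal sketch of the kind of uniform put/get storage interface described above is given below; the class and device names are hypothetical, and real flight software would also handle power management, fault tolerance, and the out-of-space case omitted here.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MemoryDevice:
    """One physical storage device on a (hypothetical) SpaceWire network."""
    name: str
    capacity: int
    writes: int = 0                      # crude proxy for wear
    blocks: Dict[str, bytes] = field(default_factory=dict)

class CloudStore:
    """Presents a single put/get interface; placement hides wear-leveling."""
    def __init__(self, devices):
        self.devices = devices

    def put(self, key: str, data: bytes) -> str:
        # Naive wear-leveling: write to the least-worn device that has room.
        target = min((d for d in self.devices
                      if sum(map(len, d.blocks.values())) + len(data) <= d.capacity),
                     key=lambda d: d.writes)
        target.blocks[key] = data
        target.writes += 1
        return target.name

    def get(self, key: str) -> bytes:
        for d in self.devices:
            if key in d.blocks:
                return d.blocks[key]
        raise KeyError(key)

store = CloudStore([MemoryDevice("mass-mem-A", 1 << 20),
                    MemoryDevice("mass-mem-B", 1 << 20)])
print(store.put("telemetry/0001", b"\x01\x02\x03"))
print(store.get("telemetry/0001"))
```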

  7. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection.

    Science.gov (United States)

    Ding, Hong; Dwaraknath, Shyam S; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit-cell area match between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm and data are being integrated within the Materials Project online framework and hence will be publicly available. PMID:27145398
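
    A highly simplified version of the screening metrics mentioned above (geometric area matching plus an isotropic plane-stress strain energy density) is sketched below; the lattice parameters and elastic constants are illustrative placeholders, not the values or the full model used in the paper.

```python
import numpy as np

# Placeholder in-plane lattice parameters (Angstrom) and film elastic constants.
a_sub, b_sub = 4.59, 2.96      # hypothetical substrate surface mesh
a_film, b_film = 4.55, 3.00    # hypothetical film surface mesh
E_film, nu_film = 180e9, 0.30  # Young's modulus (Pa) and Poisson ratio

# In-plane mismatch strains (film strained to match the substrate).
eps_a = (a_sub - a_film) / a_film
eps_b = (b_sub - b_film) / b_film

# Geometric area matching between substrate and film surface unit meshes.
area_mismatch = abs(a_sub * b_sub - a_film * b_film) / (a_film * b_film)

# Isotropic plane-stress strain energy density (J/m^3):
# u = E / (2 (1 - nu^2)) * (eps_a^2 + eps_b^2 + 2 nu eps_a eps_b)
u = E_film / (2 * (1 - nu_film**2)) * (eps_a**2 + eps_b**2
                                       + 2 * nu_film * eps_a * eps_b)

print(f"strain eps_a = {eps_a:+.3%}, eps_b = {eps_b:+.3%}")
print(f"area mismatch = {area_mismatch:.3%}")
print(f"strain energy density ~ {u/1e6:.1f} MJ/m^3")
```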

  8. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit-cell area match between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm and data are being integrated within the Materials Project online framework and hence will be publicly available.

  9. A Near-Term Quantum Computing Approach for Hard Computational Problems in Space Exploration

    CERN Document Server

    Smelyanskiy, Vadim N; Knysh, Sergey I; Williams, Colin P; Johnson, Mark W; Thom, Murray C; Macready, William G; Pudenz, Kristen L

    2012-01-01

    In this article, we show how to map a sampling of the hardest artificial intelligence problems in space exploration onto equivalent Ising models that can then be attacked using quantum annealing implemented in the D-Wave machine. We overview the existing results as well as propose new Ising model implementations for quantum annealing. We review supervised and unsupervised learning algorithms for classification and clustering with applications to feature identification and anomaly detection. We introduce algorithms for data fusion and image matching for remote sensing applications. We overview planning problems for space exploration mission applications and algorithms for diagnostics and recovery with applications to deep space missions. We describe combinatorial optimization algorithms for task assignment in the context of autonomous unmanned exploration. Finally, we discuss ways to circumvent the limitation of the Ising mapping using a "blackbox" approach based on ideas from probabilistic computing. In this ...
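
    To make the Ising mapping concrete, the toy sketch below encodes a small max-cut instance (one standard combinatorial problem) as an Ising energy and finds its ground state by exhaustive search, which stands in for the quantum annealer; it illustrates the mapping only and is not one of the authors' problem formulations.

```python
import itertools
import numpy as np

# Toy graph (edges with weights); cutting an edge contributes its weight.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 0.5}
n = 4

# Maximising the cut is equivalent to minimising the Ising energy
# E(s) = sum_{(i,j)} J_ij s_i s_j  with J_ij = +w_ij and s_i in {-1, +1}.
J = np.zeros((n, n))
for (i, j), w in edges.items():
    J[i, j] = w

def energy(s):
    return sum(J[i, j] * s[i] * s[j] for i in range(n) for j in range(n))

# Brute force stands in for the quantum annealer on this 4-spin toy problem.
best = min(itertools.product([-1, 1], repeat=n), key=energy)
cut = sum(w for (i, j), w in edges.items() if best[i] != best[j])
print("ground-state spins:", best, " cut weight:", cut)
```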

  10. A MOBILE COMPUTING TECHNOLOGY FORESIGHT STUDY WITH SCENARIO PLANNING APPROACH

    Directory of Open Access Journals (Sweden)

    Wei-Hsiu Weng

    2015-09-01

    Full Text Available Although the importance of mobile computing is gradually being recognized, the directions of mobile computing technology development and adoption have not been clearly identified. This paper focuses on the technology planning strategy for organizations that have an interest in developing or adopting mobile computing technology. By using scenario analysis, a technology planning strategy is constructed. In this study, thirty mobile computing technologies are classified into six groups, and the importance and risk factors of these technologies are then evaluated under two possible scenarios. The main research findings include the discovery that most mobile computing software technologies are rated high to medium in importance and low in risk in both scenarios, and that scenario changes will have less impact on mobile computing devices and on mobile computing software technologies. These results provide a reference for organizations interested in developing or adopting mobile computing technology.

  11. A MOBILE COMPUTING TECHNOLOGY FORESIGHT STUDY WITH SCENARIO PLANNING APPROACH

    OpenAIRE

    Wei-Hsiu Weng; Woo-Tsong Lin

    2015-01-01

    Although the importance of mobile computing is gradually being recognized, mobile computing technology development and adoption have not been clearly realized. This paper focuses on the technology planning strategy for organizations that have an interest in developing or adopting mobile computing technology. By using scenario analysis, a technology planning strategy is constructed. In this study, thirty mobile computing technologies are classified into six groups, and the importance and risk ...

  12. An Educational Approach to Computationally Modeling Dynamical Systems

    Science.gov (United States)

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  13. Differentiating Information Skills and Computer Skills: A Factor Analytic Approach

    OpenAIRE

    Pask, Judith M.; Saunders, E. Stewart

    2004-01-01

    A basic tenet of information literacy programs is that the skills needed to use computers and the skills needed to find and evaluate information are two separate sets of skills. Outside the library this is not always the view. The claim is sometimes made that information skills are acquired by learning computer skills. All that is needed is a computer lab and someone to teach computer skills. This study uses data from a survey of computer and information skills to determine whether or not...

  14. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  15. A Monomial Chaos Approach for Efficient Uncertainty Quantification in Computational Fluid Dynamics

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Bijl, H.

    2006-01-01

    A monomial chaos approach is proposed for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can still be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equ

  16. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Science.gov (United States)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may lead to dose reduction to the patient outside the target region of interest (ROI) in dental x-ray imaging. Here the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Two ROI-to-phantom size ratios (0.28 and 0.14) and four projection numbers (360, 180, 90, and 45) were tested. We successfully reconstructed ICT images of substantially high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.
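
    The compressed-sensing principle behind such reconstructions, recovering a sparse object from far fewer measurements than unknowns by alternating a data-fidelity gradient step with soft thresholding, can be illustrated on a 1-D toy problem; the sketch below is not the authors' ICT algorithm or geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover a sparse 1-D signal x from m << n linear measurements
# y = A x, standing in for few-view projection data in the CS framework.
n, m, k = 200, 60, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true

# ISTA: gradient step on ||Ax - y||^2 followed by soft-thresholding (L1 prior).
lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```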

  17. A computational intelligence approach to the Mars Precision Landing problem

    Science.gov (United States)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (error propagated from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
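
    For readers unfamiliar with PSO, the minimal global-best variant below minimizes a toy two-dimensional cost standing in for the landed-footprint error; the inertia and acceleration coefficients are common textbook values, and the real problem would evaluate each particle by propagating the entry dynamics under dispersed conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

def miss_distance(p):
    """Toy cost standing in for landed-footprint error."""
    x, y = p
    return (x - 3.0) ** 2 + (y + 1.5) ** 2 + 0.5 * np.sin(3 * x) ** 2

# Standard global-best PSO (inertia w, cognitive c1, social c2 weights).
n_particles, n_iter, w, c1, c2 = 30, 200, 0.72, 1.49, 1.49
pos = rng.uniform(-10, 10, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([miss_distance(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([miss_distance(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", np.round(gbest, 3),
      " cost:", round(float(miss_distance(gbest)), 5))
```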

  18. A Soft Computing Approach to Kidney Diseases Evaluation.

    Science.gov (United States)

    Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique

    2015-10-01

    Kidney (renal) failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is exposed, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination may denote good practice, some key information has to be obtained from the valuation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes anomalous kidney function and/or structure, and there is evidence that treatment may avoid or delay its progression, either by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to do with traditional methodologies and existing tools for problem solving. Hence, in this work, we focus on the development of a hybrid decision support system, in terms of its knowledge representation and reasoning procedures based on Logic Programming, that allows one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks, in order to weigh the Degree-of-Confidence that one has in such a happening. The present study involved 558 patients with an average age of 51.7 years, and chronic kidney disease was observed in 175 cases. The dataset comprises twenty-four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the

  19. Invariant Image Watermarking Using Accurate Zernike Moments

    Directory of Open Access Journals (Sweden)

    Ismail A. Ismail

    2010-01-01

    Full Text Available Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
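
    The sketch below evaluates low-order Zernike moments by direct quadrature of the radial polynomials over a polar sampling of the unit disk, in the spirit of the polar-coordinate computation described above; it is a numerical illustration, not the authors' error-free algorithm or their watermark embedding scheme.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = (-1) ** k * factorial(n - k) / (
            factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        R += c * rho ** (n - 2 * k)
    return R

def zernike_moment(f_polar, n, m, n_r=128, n_t=256):
    """Direct quadrature of Z_{n,m} on a polar grid over the unit disk."""
    rho = (np.arange(n_r) + 0.5) / n_r             # radial midpoints in (0, 1)
    theta = 2 * np.pi * np.arange(n_t) / n_t
    RR, TT = np.meshgrid(rho, theta, indexing="ij")
    kernel = radial_poly(n, m, RR) * np.exp(-1j * m * TT)
    dA = RR * (1.0 / n_r) * (2 * np.pi / n_t)       # rho drho dtheta
    return (n + 1) / np.pi * np.sum(f_polar(RR, TT) * kernel * dA)

# Example "image": f equals R_{2,0}(rho) itself, so |Z_{2,0}| should dominate.
f = lambda r, t: radial_poly(2, 0, r)
for (n, m) in [(0, 0), (2, 0), (2, 2), (4, 0)]:
    print(f"|Z_{n}{m}| = {abs(zernike_moment(f, n, m)):.4f}")
```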

  20. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological media

  1. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    OpenAIRE

    Vladimiras Dolgopolovas; Valentina Dagienė; Saulius Minkevičius; Leonidas Sakalauskas

    2015-01-01

    The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same ti...

  2. Computer Science Contests for Secondary School Students: Approaches to Classification

    Directory of Open Access Journals (Sweden)

    Wolfgang POHL

    2006-04-01

    Full Text Available The International Olympiad in Informatics currently provides a model which is imitated by the majority of contests for secondary school students in Informatics or Computer Science. However, the IOI model can be criticized, and alternative contest models exist. To support the discussion about contests in Computer Science, several dimensions for characterizing and classifying contests are suggested.

  3. Gesture Recognition by Computer Vision: An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  4. Computational approaches and metrics required for formulating biologically realistic nanomaterial pharmacokinetic models

    International Nuclear Information System (INIS)

    The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data and a lack of computational approaches to integrating nanomaterial properties to biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans since allometric principles may not be applicable. The dynamics of corona formation, which modulates biological interactions including cellular uptake and biodistribution, is thereby a crucial process involved in the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces that are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished. (paper)

  5. Computational approaches and metrics required for formulating biologically realistic nanomaterial pharmacokinetic models

    Science.gov (United States)

    Riviere, Jim E.; Scoglio, Caterina; Sahneh, Faryad D.; Monteiro-Riviere, Nancy A.

    2013-01-01

    The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data and a lack of computational approaches to integrating nanomaterial properties to biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans since allometric principles may not be applicable. The dynamics of corona formation, which modulates biological interactions including cellular uptake and biodistribution, is thereby a crucial process involved in the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces that are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished.

  6. Development of Computer Science Disciplines - A Social Network Analysis Approach

    CERN Document Server

    Pham, Manh Cuong; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science treats conference publications as a primary form of scholarly output. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or of a particular domain based on the ISI-published JCR (Journal Citation Report). Although these data cover most of the important journals, they lack computer science conference and workshop proceedings. That results in an imprecise and incomplete analysis of computer science knowledge. This paper presents an analysis of the computer science knowledge network constructed from all types of publications, aiming at providing a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at journal/conference level using citation linkage, to identify the development of sub-disciplines. We investiga...

  7. The role of chemistry and pH of solid surfaces for specific adsorption of biomolecules in solution—accurate computational models and experiment

    International Nuclear Information System (INIS)

    Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1–1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed. (paper)

  8. Accurate Finite Difference Algorithms

    Science.gov (United States)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
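
    The notion of formal order of accuracy can be illustrated with standard second- and fourth-order central differences (these are generic textbook formulas, not the propagation algorithms of the paper):

```python
import numpy as np

def d1_second_order(f, x, h):
    """Standard 2nd-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth_order(f, x, h):
    """Standard 4th-order central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

f, df = np.sin, np.cos
x = 1.0
for h in (0.1, 0.05, 0.025):
    e2 = abs(d1_second_order(f, x, h) - df(x))
    e4 = abs(d1_fourth_order(f, x, h) - df(x))
    print(f"h={h:<6} 2nd-order err={e2:.2e}   4th-order err={e4:.2e}")
```

    Halving h cuts the error by roughly a factor of 4 for the second-order formula and 16 for the fourth-order one, which is what the stated orders predict.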

  9. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, A C [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States); Harrawood, B P [Duke Advance Imaging Labs, Department of Radiology, 2424 Erwin Rd, Suite 302, Durham, NC 27705 (United States); Bender, J E [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States); Tourassi, G D [Duke Advance Imaging Labs, Department of Radiology, 2424 Erwin Rd, Suite 302, Durham, NC 27705 (United States); Kapadia, A J [Department of Biomedical Engineering, Duke University, 136 Hudson Hall, Durham, NC 27708 (United States)

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in

  10. What Computational Approaches Should be Taught for Physics?

    Science.gov (United States)

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. Teaching such Introductory Scientific Computing courses requires that some choices be made as to which subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and in Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  11. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    Science.gov (United States)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  12. A Human-Centred Tangible approach to learning Computational Thinking

    Directory of Open Access Journals (Sweden)

    Tommaso Turchi

    2016-08-01

    Full Text Available Computational Thinking has recently become a focus of many teaching and research domains; it encapsulates those thinking skills integral to solving complex problems using a computer, thus being widely applicable in our society. It is influencing research across many disciplines and also coming into the limelight of education, mostly thanks to public initiatives such as the Hour of Code. In this paper we present our arguments for promoting Computational Thinking in education through the Human-centred paradigm of Tangible End-User Development, namely by exploiting objects whose interactions with the physical environment are mapped to digital actions performed on the system.

  13. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing. The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced met
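
    A minimal example of the sampling-based computation the book introduces, Monte Carlo integration with a simple standard-error estimate, might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo estimate of I = integral_0^1 exp(-x^2) dx with its standard error.
n = 100_000
samples = np.exp(-rng.random(n) ** 2)
estimate = samples.mean()
std_err = samples.std(ddof=1) / np.sqrt(n)
print(f"I ~ {estimate:.5f} +/- {std_err:.5f}   (exact ~ 0.74682)")
```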

  14. Loss tolerant one-way quantum computation -- a horticultural approach

    CERN Document Server

    Varnava, M; Rudolph, T; Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2005-01-01

    We introduce a scheme for fault tolerantly dealing with losses in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively - no coherent measurements or coherent correction is required. We then use this procedure within a specific linear optical quantum computation proposal to show that: (i) given perfect sources, detector inefficiencies of up to 50% can be tolerated and (ii) given perfect detectors, the purity of the photon source (overlap of the photonic wavefunction with the desired single mode) need only be greater than 66.6% for efficient computation to be possible.

  15. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    Science.gov (United States)

    Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law, while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in a magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff, and hence requires fewer iterations to reach convergence, but it also yields a converged solution that exhibits a significantly higher resolution. The combination of faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems, including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  16. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling is that current trends toward strengthening the general-education and worldview functions of computer science define the need for additional research on the…

  17. Photonic reservoir computing: a new approach to optical information processing

    OpenAIRE

    Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2010-01-01

    Despite ever-increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks and has been successfully used in several pattern classification problems, like speech and image recognition. The implementations have so far been in software, limiting their speed and power efficiency. ...
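
    Since photonic reservoirs emulate the echo state networks used in software reservoir computing, a compact software sketch helps fix the idea: a fixed random recurrent reservoir is driven by the input, and only a linear (ridge-regression) readout is trained. The task and parameters below are illustrative, not those of the authors' photonic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(3000)
u = np.sin(0.2 * t) + 0.05 * rng.normal(size=t.size)   # input
y = np.roll(u, -1)                                      # target: next sample

# Fixed random reservoir (only the readout W_out is trained).
N, rho, leak = 200, 0.9, 0.3
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))               # set spectral radius

x = np.zeros(N)
states = np.zeros((t.size, N))
for i in range(t.size):
    pre = W_in[:, 0] * u[i] + W @ x
    x = (1 - leak) * x + leak * np.tanh(pre)
    states[i] = x

# Ridge-regression readout trained on the first 2000 steps (after washout).
wash, split, lam = 100, 2000, 1e-6
S, Y = states[wash:split], y[wash:split]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ Y)

pred = states[split:-1] @ W_out
err = np.sqrt(np.mean((pred - y[split:-1]) ** 2))
print("test RMSE:", round(float(err), 4))
```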

  18. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    Science.gov (United States)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of the INTEGRAL data. AVES is a modular system that uses the software resource manager SLURM and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs, able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users area. AVES was designed and built to solve growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which is due to increase every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload on the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this, we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. The overall computing time has thus been improved by up to a factor of 70 compared with that of a personal computer with a single processor.

  19. A computational approach to George Boole's discovery of mathematical logic

    OpenAIRE

    Ledesma, Luis de; Pérez, Aurora; Borrajo, Daniel; Laita, Luis M.

    1997-01-01

    This paper reports a computational model of Boole's discovery of Logic as a part of Mathematics. George Boole (1815–1864) found that the symbols of Logic behaved as algebraic symbols, and he then rebuilt the whole contemporary theory of Logic by the use of methods such as the solution of algebraic equations. Study of the different historical factors that influenced this achievement has served as background for our two main contributions: a computational representation of Boole's Logic before ...

  20. AN ETHICAL ASSESSMENT OF COMPUTER ETHICS USING SCENARIO APPROACH

    OpenAIRE

    Maslin Masrom; Zuraini Ismail; Ramlah Hussein

    2010-01-01

    Ethics refers to a set of rules that define right and wrong behavior, used for moral decision making. In this case, computer ethics is one of the major issues in information technology (IT) and information system (IS). The ethical behaviour of IT students and professionals need to be studied in an attempt to reduce many unethical practices such as software piracy, hacking, and software intellectual property violations. This paper attempts to address computer-related scenarios that can be used...

  1. Collaboration in computer science: a network science approach. Part I

    OpenAIRE

    Franceschet, Massimo

    2010-01-01

    Co-authorship in publications within a discipline uncovers interesting properties of the analysed field. We represent collaboration in academic papers of computer science in terms of differently grained networks, including those sub-networks that emerge from conference and journal co-authorship only. We take advantage of the network science paraphernalia to take a picture of computer science collaboration including all papers published in the field since 1936. We investigate typical bibliomet...

  2. An Econometric Approach of Computing Competitiveness Index in Human Capital

    OpenAIRE

    Salahodjaev, Raufhon; Nazarov, Zafar

    2013-01-01

    The aim of this paper is to provide a methodology for estimating one of the components (pillars) of the Global Competitiveness Index (GCI), the health and primary education (HPE) pillar, for countries not included in the Global Competitiveness Report, using conventional econometric techniques. Specifically, using weighted least squares and bootstrapping methods, we are able to compute the HPE for two countries of the former Soviet Union, Uzbekistan and Belarus, and then compare the computed...
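
    A generic sketch of the weighted-least-squares-plus-bootstrap recipe is given below on synthetic data; it illustrates the estimation strategy only and does not reproduce the paper's specification, weights, or country data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: pillar score vs. two country-level predictors.
n = 80
X = np.column_stack([np.ones(n), rng.normal(0, 1, (n, 2))])
beta_true = np.array([4.5, 0.8, -0.3])
w = rng.uniform(0.5, 2.0, n)                      # observation weights
y = X @ beta_true + rng.normal(0, 1 / np.sqrt(w))

def wls(X, y, w):
    """Weighted least squares: solve (X'WX) b = X'Wy."""
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

beta_hat = wls(X, y, w)

# Bootstrap the prediction for a hypothetical out-of-sample country profile.
x_new = np.array([1.0, 0.2, -1.0])
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(x_new @ wls(X[idx], y[idx], w[idx]))
ci = np.percentile(boot, [2.5, 97.5])

print("point prediction:", round(float(x_new @ beta_hat), 3),
      " 95% bootstrap CI:", np.round(ci, 3))
```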

  3. Computational Approaches for Probing the Formation of Atmospheric Molecular Clusters

    DEFF Research Database (Denmark)

    Elm, Jonas

    This thesis presents the investigation of atmospheric molecular clusters using computational methods. Previous investigations have focused on solving problems related to atmospheric nucleation, and have not been targeted at the performance of the applied methods. This thesis focuses on assessing the performance of computational strategies in order to identify a sturdy methodology, which should be applicable for handling various issues related to atmospheric cluster formation. Density functional theory (DFT) is applied to study individual cluster formation steps. Utilizing large test sets of numerous

  4. Computational challenges of structure-based approaches applied to HIV.

    Science.gov (United States)

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  5. New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, M; Krishnan, V V

    2003-02-07

    The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, which allows an inherent large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits that implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates shrinks toward atomic dimensions, the performance of such a system is no longer classical but is instead governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable to classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing long numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until recently, when several methods were proposed to build an experimental QC. These methods include trapped ions, cavity-QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the

  6. New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy

    International Nuclear Information System (INIS)

    The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, which allows an inherent large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits that implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates shrinks toward atomic dimensions, the performance of such a system is no longer classical but is instead governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable to classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing long numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until recently, when several methods were proposed to build an experimental QC. These methods include trapped ions, cavity-QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the spins with

  7. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework for the smart grid environment by creating a small integrated energy hub supporting real-time computing for handling large volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
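
    A deterministic toy version of the load-scheduling idea, shifting a fixed amount of flexible energy toward cheap hours subject to per-hour limits, can be written as a small linear program; the sketch below uses SciPy's linprog in place of the Gurobi/Matlab model of the paper, with purely hypothetical prices and limits.

```python
import numpy as np
from scipy.optimize import linprog

# 24-hour illustrative price signal ($/kWh) and flexible-load limits (kWh).
price = np.array([0.08]*6 + [0.15]*4 + [0.22]*4 + [0.15]*6 + [0.10]*4)
max_per_hour = np.full(24, 3.0)     # appliance cannot draw more than 3 kWh/h
total_energy = 20.0                 # total flexible energy to schedule (kWh)

# minimise  price . x   s.t.  sum(x) = total_energy,  0 <= x <= max_per_hour
res = linprog(c=price,
              A_eq=np.ones((1, 24)), b_eq=[total_energy],
              bounds=[(0, m) for m in max_per_hour],
              method="highs")

print("scheduled kWh per hour:", np.round(res.x, 2))
print("daily cost: $%.2f" % (price @ res.x))
```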

  8. Challenges and possible approaches: towards the petaflops computers

    Institute of Scientific and Technical Information of China (English)

    Depei QIAN; Danfeng ZHU

    2009-01-01

    In parallel with the R&D efforts in the USA and Europe, China's National High-tech R&D Program has set up its goal of developing petaflops computers. Researchers and engineers world-wide are looking for appropriate methods and technologies to achieve the petaflops computer system. Based on a discussion of important design issues in developing the petaflops computer, this paper raises the major technological challenges, including the memory wall, low-power system design, interconnects, and programming support. Current efforts in addressing some of these challenges and in pursuing possible solutions for developing the petaflops systems are presented. Several existing systems are briefly introduced as examples, including Roadrunner, Cray XT5 Jaguar, Dawning 5000A/6000, and Lenovo DeepComp 7000. Architectures proposed by Chinese researchers for implementing the petaflops computer are also introduced. Advantages of the architecture as well as the difficulties in its implementation are discussed. Finally, future research directions in the development of high-productivity computing systems are discussed.

  9. AN ETHICAL ASSESSMENT OF COMPUTER ETHICS USING SCENARIO APPROACH

    Directory of Open Access Journals (Sweden)

    Maslin Masrom

    2010-06-01

    Full Text Available Ethics refers to a set of rules that define right and wrong behavior, used for moral decision making. In this case, computer ethics is one of the major issues in information technology (IT) and information systems (IS). The ethical behaviour of IT students and professionals needs to be studied in an attempt to reduce many unethical practices such as software piracy, hacking, and software intellectual property violations. This paper attempts to address computer-related scenarios that can be used to examine computer ethics. A computer-related scenario consists of a short description of an ethical situation, whereby the subjects of the study, such as IT professionals or students, rate the ethics of the scenario, namely attempting to identify the ethical issues involved. This paper also reviews several measures of computer ethics in different settings. The perceptions of various dimensions of ethical behaviour in IT that are related to the circumstances of the ethical scenario are also presented.

  10. Computer Mediated Learning: An Example of an Approach.

    Science.gov (United States)

    Arcavi, Abraham; Hadas, Nurit

    2000-01-01

    There are several possible approaches in which dynamic computerized environments play a significant and possibly unique role in supporting innovative learning trajectories in mathematics in general and geometry in particular. Describes an approach based on a problem situation and some experiences using it with students and teachers. (Contains 15…

  11. Computational strategies and improvements in the linear algebraic variational approach to rearrangement scattering

    Science.gov (United States)

    Schwenke, David W.; Mladenovic, Mirjana; Zhao, Meishan; Truhlar, Donald G.; Sun, Yan

    1988-01-01

    The computational steps in calculating quantum mechanical reactive scattering amplitudes by the L2 generalized Newton variational principle are discussed with emphasis on computational strategies and recent improvements that make the calculations more efficient. Special emphasis is placed on quadrature techniques, storage management strategies, use of symmetry, and boundary conditions. It is concluded that an efficient implementation of these procedures provides a powerful algorithm for the accurate solution of the Schroedinger equation for rearrangements.

  12. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    OpenAIRE

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises; and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without hav...

  13. A Comparative Study in Dynamic Job Scheduling Approaches in Grid Computing Environment

    Directory of Open Access Journals (Sweden)

    Amr Rekaby

    2013-09-01

    Full Text Available Grid computing is one of the most interesting research areas for present and future computing strategy and methodology. Dramatic growth in the complexity of scientific applications, and of some non-scientific applications, increases the need for distributed systems in general and grid computing in particular. One of the main challenges in a grid computing environment is the handling of jobs (tasks) in the grid. Job scheduling is the activity of scheduling the submitted jobs in the grid environment, and many approaches to it exist. This paper provides an experimental study of different approaches to grid computing job scheduling. The approaches compared are “4-levels/RMFF” and our previously published approach “X-Levels/XD-Binary Tree”. First, an introduction to grid computing and job scheduling techniques is provided, followed by a description of the existing approaches. Experiments and their results then give a practical evaluation of these approaches from different perspectives. The comparative study concludes that the overall average task waiting time is improved by approximately 30% when using the X-Levels/XD-Binary Tree approach rather than the 4-levels/RMFF approach.

  14. Exploring polymorphism in molecular crystals with a computational approach

    NARCIS (Netherlands)

    Ende, J.A. van den

    2016-01-01

    Different crystal structures can possess different properties and therefore the control of polymorphism in molecular crystals is a goal in multiple industries, e.g. the pharmaceutical industry. Part I of this thesis is a computational study at the molecular scale of a particular solid-solid polymorp

  15. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  16. R for cloud computing an approach for data scientists

    CERN Document Server

    Ohri, A

    2014-01-01

    R for Cloud Computing looks at some of the tasks performed by business analysts on the desktop (PC era) and helps the user navigate the wealth of information in R and its 4000 packages, as well as transition the same analytics to the cloud. With this information the reader can select both cloud vendors, within the sometimes confusing cloud ecosystem, and the R packages that can help process the analytical tasks with minimum effort and cost, and maximum usefulness and customization. The use of graphical user interfaces (GUI) and step-by-step screenshot tutorials is emphasized in this book to lessen the famous learning curve of R and some of the needless confusion created in cloud computing that hinders its widespread adoption. The book will help you kick-start analytics on the cloud and includes chapters on cloud computing, R, common tasks performed in analytics, scrutiny of big data analytics, and setting up and navigating cloud providers. Readers are exposed to a breadth of cloud computing ch...

  17. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  18. A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia

    Science.gov (United States)

    Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub

    2011-01-01

    We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…

  19. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  20. Using an Accurate Flow Model of Virtual Link to Optimize Network Calculus Approach in AFDX

    Institute of Scientific and Technical Information of China (English)

    刘成; 周立; 屠晓杰; 王彤

    2013-01-01

    Avionics Full Duplex Switched Ethernet (AFDX), standardized as ARINC 664, is an upgrade of industrial switched Ethernet for avionics applications. AFDX adopts mechanisms such as Virtual Links (VL) and traffic policing to guarantee deterministic communication. The network calculus approach is a basic tool for computing the upper bound on the end-to-end delay of a VL in an AFDX network, giving AFDX a theoretical basis for studying real-time performance. However, the conventional network calculus approach uses a simple flow model of the VL, and the delay bounds obtained from that model are pessimistic. In this paper, an accurate flow model of the VL is adopted in the network calculus approach, and it is shown that network calculus based on this accurate flow model yields a tighter upper bound on the VL delay.
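
    The delay bounds discussed in this record come from network calculus, where an arrival curve for the VL is combined with the service curve of each switch output port. A single-node textbook bound for a token-bucket arrival curve and a rate-latency service curve is sketched below; the frame size, BAG and port parameters are illustrative assumptions, and the paper's refined flow model is not reproduced here.

```python
# Single-node network-calculus delay bound: arrival curve alpha(t) = sigma + rho*t
# (token bucket), service curve beta(t) = R*max(t - T, 0) (rate-latency), giving
# the classical bound D <= T + sigma/R. Textbook formula, not the refined VL model.

def delay_bound(sigma_bits, rho_bps, R_bps, T_s):
    """Delay upper bound for a leaky-bucket flow through a rate-latency server."""
    if rho_bps > R_bps:
        raise ValueError("unstable node: arrival rate exceeds service rate")
    return T_s + sigma_bits / R_bps

# Illustrative AFDX-like values (assumed): one 1518-byte frame per 2 ms BAG,
# 100 Mbit/s output port, 16 us fixed switching latency.
sigma = 1518 * 8            # burst: one maximum-size frame, in bits
rho = sigma / 2e-3          # sustained rate, bit/s
R = 100e6                   # port service rate, bit/s
T = 16e-6                   # fixed latency, s

print(f"per-hop delay bound: {delay_bound(sigma, rho, R, T) * 1e6:.1f} us")
```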

  1. Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.

    Science.gov (United States)

    Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao

    2013-09-10

    Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with the protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied fractionally between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of the large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data. PMID:26592414
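
    The λ-coupling described above lends itself to thermodynamic integration: the free-energy change is the integral of ⟨∂U/∂λ⟩ over λ from 0 to 1, and the pKa follows from ΔG/(RT ln 10), ignoring reference-state corrections. The sketch below shows only that post-processing step; the ⟨∂U/∂λ⟩ profile is a synthetic placeholder, not output of the mixing-Hamiltonian QM/MM simulations.

```python
# Post-processing sketch for a lambda-coupled (de)protonation free energy:
# dG(0->1) = integral of <dU/dlambda> over lambda, then pKa = dG_deprot/(RT ln 10).
# The <dU/dlambda> profile below is a synthetic placeholder, not QM/MM output,
# and reference-state corrections are ignored.
import numpy as np

R = 8.314462618e-3          # gas constant, kJ/(mol K)
T = 298.15                  # temperature, K

lambdas = np.linspace(0.0, 1.0, 11)          # 0 = deprotonated, 1 = protonated
du_dlam = -120.0 + 60.0 * lambdas            # kJ/mol, purely illustrative profile

dg_protonation = np.trapz(du_dlam, lambdas)  # free energy along 0 -> 1, kJ/mol
dg_deprotonation = -dg_protonation
pka = dg_deprotonation / (R * T * np.log(10.0))
print(f"dG_deprot ~ {dg_deprotonation:.1f} kJ/mol  ->  pKa ~ {pka:.1f}")
```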

  2. Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.

    Science.gov (United States)

    Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao

    2013-09-10

    Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with the protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied fractionally between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of the large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.

  3. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
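
    The accept/reject core of an ABC scheme like PopSizeABC can be illustrated independently of the coalescent machinery. The sketch below uses a toy simulator and toy summary statistics purely to show the rejection logic; the prior, tolerance and statistics are assumptions and do not reproduce the published pipeline (which uses the folded AFS and binned LD).

```python
# Generic ABC rejection sketch: draw from a prior, simulate, keep parameters whose
# summary statistics are close to the observed ones. Toy simulator/statistics only;
# PopSizeABC instead uses coalescent simulations, the folded AFS and binned LD.
import numpy as np

rng = np.random.default_rng(0)

def simulate(pop_size, n=200):
    """Toy stand-in for a coalescent simulation: diversity grows with pop size."""
    return rng.poisson(lam=pop_size * 1e-3, size=n)

def summary(data):
    return np.array([data.mean(), data.var()])

observed = summary(simulate(5000.0))        # pretend these are the real data

accepted = []
for _ in range(20_000):
    theta = rng.uniform(100.0, 20_000.0)    # flat prior on population size (assumed)
    if np.linalg.norm(summary(simulate(theta)) - observed) < 0.5:  # tolerance (assumed)
        accepted.append(theta)

if accepted:
    print(f"accepted {len(accepted)} draws, posterior mean N ~ {np.mean(accepted):.0f}")
```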

  4. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  5. Computational Model of Music Sight Reading: A Reinforcement Learning Approach

    CERN Document Server

    Yahya, Keyvan

    2010-01-01

    Although the music sight reading process has usually been studied from cognitive or neurological viewpoints, computational learning methods such as reinforcement learning have not yet been used to model such processes. In this paper, with regard to the essential properties of our specific problem, we consider the value function concept and show that the optimum policy can be obtained by the method we propose without computing the complex value functions, which are in most cases inexact. The algorithm we offer is essentially a PDE-based algorithm associated with stochastic optimization programming, and we argue that in this case it is more applicable than normative algorithms such as the temporal difference method.

  6. Analytic reconstruction approach for parallel translational computed tomography.

    Science.gov (United States)

    Kong, Huihua; Yu, Hengyong

    2015-01-01

    To develop low-cost and low-dose computed tomography (CT) scanners for developing countries, a parallel translational computed tomography (PTCT) scheme was recently proposed, in which the source and detector are translated in opposite directions with respect to the imaging object, without a slip-ring. In this paper, we develop an analytic filtered-backprojection (FBP)-type reconstruction algorithm for two-dimensional (2D) fan-beam PTCT and extend it to three-dimensional (3D) cone-beam geometry in a Feldkamp-type framework. In particular, a weighting function is constructed to deal with data redundancy for multiple-translation PTCT and to eliminate image artifacts. Extensive numerical simulations are performed to validate and evaluate the proposed analytic reconstruction algorithms, and the results confirm their correctness and merits. PMID:25882732

  7. Distance Based Asynchronous Recovery Approach In Mobile Computing Environment

    Directory of Open Access Journals (Sweden)

    Yogita Khatri,

    2012-06-01

    Full Text Available A mobile computing system is a distributed system in which at least one of the processes is mobile. Such systems are constrained by a lack of stable storage, low network bandwidth, mobility, frequent disconnection and limited battery life. Checkpointing is one of the techniques commonly used to provide fault tolerance in mobile computing environments. To suit the mobile environment, a distance-based recovery scheme is proposed which is based on checkpointing and message logging. After the system recovers from failures, only the failed processes roll back and restart from their respective recent checkpoints, independently of the others. The salient feature of this scheme is that it reduces the transfer and recovery cost: while the mobile host moves within a specific range, recovery information is not moved, and it is only transferred nearby if the mobile host moves out of that range.

  8. Variations of geometric invariant quotients for pairs, a computational approach

    OpenAIRE

    Gallardo, Patricio; Martinez-Garcia, Jesus

    2016-01-01

    We study, from a computational viewpoint, the GIT compactifications of pairs formed by a hypersurface and a hyperplane. We provide a general setting and algorithms to calculate all polarizations which give different GIT quotients, the finite number of one-parameter subgroups required to detect the lack of stability, and all maximal orbits of non-stable pairs. Our algorithms have been fully implemented in Python for all dimensions and degrees. We apply our work to the case of cubic surface...

  9. Computational approaches to identify functional genetic variants in cancer genomes

    Science.gov (United States)

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris; Ritchie, Graham R.S.; Creixell, Pau; Karchin, Rachel; Vazquez, Miguel; Fink, J. Lynn; Kassahn, Karin S.; Pearson, John V.; Bader, Gary; Boutros, Paul C.; Muthuswamy, Lakshmi; Ouellette, B.F. Francis; Reimand, Jüri; Linding, Rune; Shibata, Tatsuhiro; Valencia, Alfonso; Butler, Adam; Dronov, Serge; Flicek, Paul; Shannon, Nick B.; Carter, Hannah; Ding, Li; Sander, Chris; Stuart, Josh M.; Stein, Lincoln D.; Lopez-Bigas, Nuria

    2014-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype. PMID:23900255

  10. A learning strategy approach for teaching novice computer programmers

    OpenAIRE

    Begley, Donald D.

    1984-01-01

    Approved for public release; distribution is unlimited. The purpose of this thesis is to investigate various learning strategies and present some suggested applications for the teaching of computer programming to Marine Corps entry-level programmers. These learning strategies are used to develop a cognitively designed structure for the teaching of the software engineering process. This structure is designed so that programmers could have readily available in their thinking process modern ...

  11. Analysis of diabetic retinopathy biomarker VEGF gene by computational approaches

    OpenAIRE

    Jayashree Sadasivam; Ramesh, N; Vijayalakshmi, K.; Vinni Viridi; Shiva prasad

    2012-01-01

    Diabetic retinopathy, the most common diabetic eye disease, is caused by changes in the blood vessels of the retina, which remain its major cause. It is characterized by vascular permeability and increased tissue ischemia and angiogenesis. One of the biomarkers for diabetic retinopathy has been identified, through computational analysis, as the Vascular Endothelial Growth Factor (VEGF) gene. VEGF is a sub-family of growth factors, the platelet-derived growth factor family of cystine-knot growth factors...

  12. Computational Approaches to Viral Evolution and Rational Vaccine Design

    Science.gov (United States)

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has made it possible to carry out a systematic analysis of these data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  13. SOFT COMPUTING APPROACH TO PREDICT INTRACRANIAL PRESSURE VALUES

    Directory of Open Access Journals (Sweden)

    Mario Versaci

    2014-01-01

    Full Text Available The estimation and prediction of intracranial pressure (ICP) values is an important step in evaluating the compliance of the human brain, above all in those cases in which increased ICP values create high-risk conditions for the patient. The standard therapy is neurosurgical but, while waiting for it, a targeted pharmacological therapy is needed, which places a heavy load on kidney function. It therefore becomes essential to establish an effective and efficient procedure for predicting ICP values over a suitable time window, so that the systematic pharmacological action can be directed towards deliveries that are really necessary. The prediction techniques most commonly used in the literature, while providing a good time window, are characterized by a heavy computational complexity that makes them unattractive for real-time applications and technology transfer. In addition, ICP sampling techniques are not free from uncertainties due to confounding factors (breath, heartbeat, voluntary and involuntary movement), which requires the manipulation of uncertain and imprecise data. The choice of soft computing prediction techniques therefore appears reasonable, firstly because they handle data affected by uncertainty and/or imprecision effectively and, secondly, because they require a reduced computational load for the same prediction window. In this study the author presents a method for predicting ICP values through a two-factor fuzzy time series, comparing the results with more sophisticated techniques.

  14. Computational morphology a computational geometric approach to the analysis of form

    CERN Document Server

    Toussaint, GT

    1988-01-01

    Computational Geometry is a new discipline of computer science that deals with the design and analysis of algorithms for solving geometric problems. There are many areas of study in different disciplines which, while being of a geometric nature, have as their main component the extraction of a description of the shape or form of the input data. This notion is more imprecise and subjective than pure geometry. Such fields include cluster analysis in statistics, computer vision and pattern recognition, and the measurement of form and form-change in such areas as stereology and developmental biolo

  15. Pancreatic trauma: The role of computed tomography for guiding therapeutic approach

    Institute of Scientific and Technical Information of China (English)

    Marco; Moschetta; Michele; Telegrafo; Valeria; Malagnino; Laura; Mappa; Amato; A; Stabile; Ianora; Dario; Dabbicco; Antonio; Margari; Giuseppe; Angelelli

    2015-01-01

    AIM: To evaluate the role of computed tomography (CT) in diagnosing traumatic injuries of the pancreas and guiding the therapeutic approach. METHODS: CT exams of 6740 patients admitted to our Emergency Department between May 2005 and January 2013 for abdominal trauma were retrospectively evaluated. Patients were identified through a search of our electronic archive system using such terms as "pancreatic injury", "pancreatic contusion", "pancreatic laceration", "peri-pancreatic fluid", and "pancreatic active bleeding". All CT examinations were performed before and after the intravenous injection of contrast material using a 16-slice multidetector row computed tomography scanner. The data sets were retrospectively analyzed by two radiologists in consensus, searching for specific signs of pancreatic injury (parenchymal fracture and laceration, focal or diffuse pancreatic enlargement/edema, pancreatic hematoma, active bleeding, fluid between the splenic vein and pancreas) and non-specific signs (inflammatory changes in peri-pancreatic fat and mesentery, fluid surrounding the superior mesenteric artery, thickening of the left anterior renal fascia, pancreatic ductal dilatation, acute pseudocyst formation/peri-pancreatic fluid collection, fluid in the anterior and posterior pararenal spaces, fluid in the transverse mesocolon and lesser sac, hemorrhage into peri-pancreatic fat, mesocolon and mesentery, extraperitoneal fluid, intraperitoneal fluid). RESULTS: One hundred and thirty-six of the 6740 (2%) patients showed CT signs of pancreatic trauma. Eight of the 136 (6%) patients underwent surgical treatment and the pancreatic injuries were confirmed in all cases. Only in 6 of the 8 surgically treated patients was pancreatic duct damage suggested in the radiological reports; it was surgically confirmed in all cases. In the 128/136 (94%) patients who underwent non-operative treatment, CT images showed pancreatic edema in 97 patients, hematoma in 31 patients

  16. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    Science.gov (United States)

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" in the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution for the elucidation of mechanisms and rationalizations of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been one of the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion in some cases is deceiving and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods on systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, the exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions

  17. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    Science.gov (United States)

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" in the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution for the elucidation of mechanisms and rationalizations of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been one of the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion in some cases is deceiving and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods on systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, the exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions

  18. Computational approach for calculating bound states in quantum field theory

    Science.gov (United States)

    Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.

    2016-09-01

    We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.
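
    As a much simpler stand-in for the same numerical idea (diagonalizing a truncated, discretized Hamiltonian), one can discretize a one-particle Schrödinger operator with an attractive Yukawa-like potential on a grid and read bound-state energies off the negative eigenvalues. The grid size, potential strength and screening length below are assumptions for illustration, not the field-theoretic model of the paper.

```python
# Direct diagonalization of a discretized Hamiltonian: a 1D single-particle
# analogue (hbar = m = 1) of the truncated-Hilbert-space idea, with an
# attractive Yukawa-like potential. All numbers are illustrative assumptions.
import numpy as np

N, L = 1500, 150.0                       # grid points and box length
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

g, mu = 2.0, 0.5                         # potential strength and screening
V = -g * np.exp(-mu * np.abs(x)) / (np.abs(x) + 1e-6)

# Kinetic term -0.5 d^2/dx^2 via the three-point finite-difference Laplacian.
diag = 1.0 / dx**2 + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
bound = energies[energies < 0.0]
print(f"{bound.size} bound states; lowest energies:", np.round(bound[:5], 4))
```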

  19. Computer-assisted modeling: Contributions of computational approaches to elucidating macromolecular structure and function: Final report

    International Nuclear Information System (INIS)

    The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field

  20. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    Science.gov (United States)

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.

  1. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    Science.gov (United States)

    Sathish, Kuppani; RamaMohan Reddy, A.

    2016-06-01

    In recent trends, grid computing is one of the emerging areas in computing platforms, supporting parallel and distributed environments. A central problem in grid computing is that scheduling workflows according to user specifications is a challenging task which also affects performance. This paper proposes a hybrid GAACO approach, which is a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm. The GAACO approach provides different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.

  2. Synergy between experimental and computational approaches to homogeneous photoredox catalysis.

    Science.gov (United States)

    Demissie, Taye B; Hansen, Jørn H

    2016-07-01

    In this Frontiers article, we highlight how state-of-the-art density functional theory calculations can contribute to the field of homogeneous photoredox catalysis. We discuss challenges in the fields and potential solutions to be found at the interface between theory and experiment. The exciting opportunities and insights that can arise through such an interdisciplinary approach are highlighted.

  3. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  4. Computational approaches to identify functional genetic variants in cancer genomes

    DEFF Research Database (Denmark)

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris;

    2013-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor but only a minority of these drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.

  5. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
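
    The essential mechanism, a cheap local error estimator driving the step size, can be sketched with a generic step-doubling controller on a toy ODE; the equation, tolerance and safety factor below are assumptions and do not correspond to the diffusive-wave solver of the paper.

```python
# Step-doubling time-step adaptivity sketch (forward Euler on a toy ODE).
# The error estimate is the difference between one full step and two half steps;
# the step size is rescaled to keep that estimate near a user tolerance.
import numpy as np

def f(t, y):                      # toy ODE dy/dt = -5*(y - cos(t)) (assumed)
    return -5.0 * (y - np.cos(t))

def euler(t, y, h):
    return y + h * f(t, y)

t, y, h = 0.0, 1.0, 1e-3
tol, t_end = 1e-4, 2.0
accepted = 0
while t < t_end:
    h = min(h, t_end - t)
    y_full = euler(t, y, h)
    y_half = euler(t + h / 2, euler(t, y, h / 2), h / 2)
    err = abs(y_half - y_full)
    if err <= tol:                # accept the step, keep the more accurate value
        t, y = t + h, y_half
        accepted += 1
    # local error of Euler scales as h^2, hence the square-root rescaling
    h *= 0.9 * np.sqrt(tol / max(err, 1e-14))

print(f"reached t={t:.2f} in {accepted} accepted steps, final y={y:.4f}")
```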

  6. A complex systems approach to computational molecular biology

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A. [Los Alamos National Lab., NM (United States)]|[Santa Fe Inst., NM (United States)

    1993-09-01

    We report on the ongoing research program at the Santa Fe Institute that applies complex systems methodology to computational molecular biology. Two aspects stressed here are the use of co-evolving adaptive neural networks for determining predictable protein structure classifications, and the use of information theory to elucidate protein structure and function. A "snapshot" of the current state of research in these two topics is presented, representing the present state of two major research thrusts in the program of Genetic Data and Sequence Analysis at the Santa Fe Institute.

  7. Computational Approaches to Toll-Like Receptor 4 Modulation.

    Science.gov (United States)

    Billod, Jean-Marc; Lacetera, Alessandra; Guzmán-Caldentey, Joan; Martín-Santamaría, Sonsoles

    2016-01-01

    Toll-like receptor 4 (TLR4), along with its accessory protein myeloid differentiation factor 2 (MD-2), builds a heterodimeric complex that specifically recognizes lipopolysaccharides (LPS), which are present on the cell wall of Gram-negative bacteria, activating the innate immune response. Some TLR4 modulators are undergoing preclinical and clinical evaluation for the treatment of sepsis, inflammatory diseases, cancer and rheumatoid arthritis. Since the relatively recent elucidation of the X-ray crystallographic structure of the extracellular domain of TLR4, research around this fascinating receptor has risen to a new level, and thus, new perspectives have been opened. In particular, diverse computational techniques have been applied to decipher, at the atomic level, some of the basis of the mechanism of functioning and the ligand recognition processes involving the TLR4/MD-2 system. This review summarizes the reported molecular modeling and computational studies that have recently provided insights into the mechanism regulating the activation/inactivation of the TLR4/MD-2 receptor system and the key interactions modulating the molecular recognition process by agonist and antagonist ligands. These studies have contributed to the design and the discovery of novel small molecules with promising activity as TLR4 modulators. PMID:27483231

  8. Cognitive control in majority search: A computational modeling approach

    Directory of Open Access Journals (Sweden)

    Hongbin eWang

    2011-02-01

    Full Text Available Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items is sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one by one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fits the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain via the V4-ACC-LPFC-IPS loop for computing the majority function.
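
    The two search algorithms can be mimicked with a toy simulation to see how their sampling costs differ; here "congruent sample" is assumed to mean a sampled subgroup whose items all share one category, which is an interpretation chosen for illustration rather than the authors' exact model specification.

```python
# Toy comparison of the two search strategies on a binary majority task.
# Note that grouping search can occasionally return the minority category;
# self-terminating search always reports the true majority.
import random

def grouping_search(items, k=3, rng=random):
    inspected = 0
    while True:
        sample = [rng.choice(items) for _ in range(k)]
        inspected += k
        if len(set(sample)) == 1:          # congruent subgroup found
            return sample[0], inspected

def self_terminating_search(items):
    counts = {0: 0, 1: 0}
    need = len(items) // 2 + 1             # majority threshold
    for i, item in enumerate(items, start=1):
        counts[item] += 1
        if counts[item] >= need:
            return item, i
    return max(counts, key=counts.get), len(items)

rng = random.Random(1)
trials = 2000
items_template = [1] * 3 + [0] * 2         # 5 items, true majority = 1 (assumed size)
g_cost = s_cost = 0
for _ in range(trials):
    items = items_template[:]
    rng.shuffle(items)
    g_cost += grouping_search(items, rng=rng)[1]
    s_cost += self_terminating_search(items)[1]

print(f"mean items inspected: grouping {g_cost / trials:.1f}, "
      f"self-terminating {s_cost / trials:.1f}")
```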

  9. A Computational Differential Geometry Approach to Grid Generation

    CERN Document Server

    Liseikin, Vladimir D

    2007-01-01

    The process of breaking up a physical domain into smaller sub-domains, known as meshing, facilitates the numerical solution of partial differential equations used to simulate physical systems. This monograph gives a detailed treatment of applications of geometric methods to advanced grid technology. It focuses on and describes a comprehensive approach based on the numerical solution of inverted Beltramian and diffusion equations with respect to monitor metrics for generating both structured and unstructured grids in domains and on surfaces. In this second edition the author takes a more detailed and practice-oriented approach towards explaining how to implement the method by: Employing geometric and numerical analyses of monitor metrics as the basis for developing efficient tools for controlling grid properties. Describing new grid generation codes based on finite differences for generating both structured and unstructured surface and domain grids. Providing examples of applications of the codes to the genera...

  10. Heat sink material selection in electronic devices by computational approach

    Energy Technology Data Exchange (ETDEWEB)

    Geffroy, P.M. [S.P.C.T.S CNRS, ENSCI, Science des Procedes Ceramiques et de Traitements de Surface, 43 a 73 avenue Albert Thomas, 87065 Limoges (France); Mathias, J.D. [G.E.M.H., ENSCI, Groupe d' Etude des Materiaux Heterogenes, 43 a 73 avenue Albert Thomas, 87065 Limoges (France); Silvain, J.F. [I.C.M.C.B. CNRS, Institut de la Chimie et de la Matiere Condensee de Bordeaux, Universite de Bordeaux 1, 87 Avenue du Docteur Schweitzer, 33608 Pessac (France)

    2008-04-15

    Due to the increasing complexity and higher density of components in modern devices, reliability and lifetime are important issues in electronic packaging. Many material solutions have been suggested and tested for the reliability optimisation of electronic devices. This study presents methodical and numerical approaches for the selection of composite materials and compares the results of different actual solutions in terms of static and fatigue criteria. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  11. Computational approaches to protein inference in shotgun proteomics

    OpenAIRE

    Li Yong; Radivojac Predrag

    2012-01-01

    Abstract Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of protein...

  12. Scaling, growth and cyclicity in biology: a new computational approach

    OpenAIRE

    Gliozzi Antonio S; Delsanto Pier; Guiot Caterina

    2008-01-01

    Abstract Background The Phenomenological Universalities approach has been developed by P.P. Delsanto and collaborators during the past 2–3 years. It represents a new tool for the analysis of experimental datasets and cross-fertilization among different fields, from physics/engineering to medicine and social sciences. In fact, it allows similarities to be detected among datasets in totally different fields and acts upon them as a magnifying glass, enabling all the available information to be e...

  13. What is Intrinsic Motivation? A Typology of Computational Approaches

    OpenAIRE

    Pierre-Yves Oudeyer; Frederic Kaplan

    2007-01-01

    Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approac...

  14. Engineering approach to model and compute electric power markets settlements

    International Nuclear Information System (INIS)

    Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure the correctness of an ISO market design is to analyze how well market price signals create incentives or penalties for creating an efficient market that achieves the market design goals. Market settlement rules are an important tool for implementing price signals, which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on differences among settlement design approaches in electric power market design, as well as areas for further development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs

  15. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Micheal J.; Partridge, L. Donald [University of New Mexico, Albuquerque, NM; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  16. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach.

    Science.gov (United States)

    Eslami-Mossallam, Behrouz; Schram, Raoul D; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  17. A Computational Approach to Politeness with Application to Social Factors

    CERN Document Server

    Danescu-Niculescu-Mizil, Cristian; Jurafsky, Dan; Leskovec, Jure; Potts, Christopher

    2013-01-01

    We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our class...

  18. A computational approach to the twin paradox in curved spacetime

    CERN Document Server

    Fung, Kenneth K H; Lewis, Geraint F; Wu, Xiaofeng

    2016-01-01

    Despite being a major component in the teaching of special relativity, the twin `paradox' is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.

  19. A computational approach to the twin paradox in curved spacetime

    Science.gov (United States)

    Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng

    2016-09-01

    Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.

  20. Computational approaches for efficiently modelling of small atmospheric clusters

    DEFF Research Database (Denmark)

    Elm, Jonas; Mikkelsen, Kurt Valentin

    2014-01-01

    Utilizing a comprehensive test set of 205 clusters of atmospheric relevance, we investigate how different DFT functionals (M06-2X, PW91, ωB97X-D) and basis sets (6-311++G(3df,3pd), 6-31++G(d,p), 6-31+G(d)) affect the thermal contribution to the Gibbs free energy and single point energy. Reducing the basis set used in the geometry and frequency calculation from 6-311++G(3df,3pd) → 6-31++G(d,p) implies a significant speed-up in computational time and only leads to small errors in the thermal contribution to the Gibbs free energy and the subsequent coupled cluster single point energy calculation.

  1. MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    J.K. Verma

    2014-06-01

    Full Text Available The last few decades have witnessed a steep growth in the demand for computational power, driven largely by the shift from the industrial age to the Information and Communication Technology (ICT) age that accompanied the digital revolution. This trend in demand has led to the establishment of large-scale data centers at geographically separate locations. These data centers consume a large amount of electrical energy, resulting in very high operating costs and a large amount of carbon dioxide (CO2) emissions due to resource underutilization. We propose the MADLVF algorithm to overcome the problems of resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.
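
    A minimal consolidation baseline helps illustrate why packing VMs onto fewer hosts reduces energy use; the sketch below is a plain best-fit-decreasing placement under assumed normalized CPU demands, not the MADLVF or MADRS algorithms themselves.

```python
# Illustrative VM-consolidation sketch (best-fit decreasing by CPU demand).
# This is NOT the MADLVF algorithm from the record above -- just a simple
# baseline showing how packing VMs onto fewer hosts cuts the active-host count.
def place_vms(vm_demands, host_capacity):
    hosts = []                                  # residual capacity per active host
    for demand in sorted(vm_demands, reverse=True):
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:                          # best fit: tightest remaining space
            best = min(candidates, key=lambda i: hosts[i] - demand)
            hosts[best] -= demand
        else:                                   # open a new host
            hosts.append(host_capacity - demand)
    return len(hosts)

vms = [0.3, 0.5, 0.2, 0.7, 0.4, 0.1, 0.6]       # normalized CPU demands (assumed)
print("active hosts needed:", place_vms(vms, host_capacity=1.0))
```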

  2. Open-ended approaches to science assessment using computers

    Science.gov (United States)

    Singley, Mark K.; Taft, Hessy L.

    1995-03-01

    We discuss the potential role of technology in evaluating learning outcomes in large-scale, widespread science assessments of the kind typically done at ETS, such as the GRE, or the College Board SAT II Subject Tests. We describe the current state-of-the-art in this area, as well as briefly outline the history of technology in large-scale science assessment and ponder possibilities for the future. We present examples from our own work in the domain of chemistry, in which we are designing problem solving interfaces and scoring programs for stoichiometric and other kinds of quantitative problem solving. We also present a new scientific reasoning item type that we are prototyping on the computer. It is our view that the technological infrastructure for large-scale constructed response science assessment is well on its way to being available, although many technical and practical hurdles remain.

  3. Photonic reservoir computing: a new approach to optical information processing

    Science.gov (United States)

    Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2010-06-01

    Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. Thus far, most implementations have been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We propose using a network of coupled Semiconductor Optical Amplifiers (SOA) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations using a benchmark speech recognition task. In spite of the differences with classical reservoir models, the performance of our photonic reservoir is comparable to that of conventional implementations and sometimes slightly better. As our implementation uses coherent light for information processing, we find that phase tuning is crucial to obtain high performance. In parallel we investigate the use of a network of photonic crystal cavities. The coupled mode theory (CMT) is used to investigate these resonators. A new framework is designed to model networks of resonators and SOAs. The same network topologies are used, but feedback is added to control the internal dynamics of the system. By adjusting the readout weights of the network in a controlled manner, we can generate arbitrary periodic patterns.
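
    A purely software analogue of reservoir computing (an echo state network) shows the training principle the photonic reservoir relies on: the recurrent part is fixed and only a linear readout is trained. The reservoir size, spectral radius, task and ridge parameter below are assumptions; the paper's reservoir is a network of coupled SOAs and photonic crystal cavities, not a random matrix.

```python
# Minimal software reservoir (echo state network) sketch: a fixed random
# recurrent network plus a trained linear readout on a toy memory task.
import numpy as np

rng = np.random.default_rng(42)
n_res, n_steps = 200, 2000

u = rng.uniform(-1, 1, n_steps)        # scalar input signal
target = np.roll(u, 5)                 # toy task: recall the input 5 steps back

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    states[t] = x

# Ridge-regression readout, trained on the second half (first half = washout).
S, y = states[1000:], target[1000:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = S @ W_out
print("readout MSE:", float(np.mean((pred - y) ** 2)))
```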

  4. A computationally efficient approach for template matching-based image registration

    Indian Academy of Sciences (India)

    Vilas H Gaidhane; Yogesh V Hote; Vijander Singh

    2014-04-01

    Image registration using template matching is an important step in image processing. In this paper, a simple, robust and computationally efficient approach is presented. The proposed approach is based on the properties of a normalized covariance matrix. The main advantage of the proposed approach is that the image matching can be achieved without calculating eigenvalues and eigenvectors of a covariance matrix, hence reduces the computational complexity. The experimental results show that the proposed approach performs better in the presence of various noises and rigid geometric transformations.
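
    For orientation, a brute-force normalized cross-correlation matcher is sketched below. It illustrates generic template matching only; it is not the normalized-covariance-matrix formulation proposed in the paper, and the image and template are random placeholders.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Return the top-left position and score of the best-matching window."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = ncc(image[i:i + h, j:j + w], template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Toy usage: locate a patch copied out of a random image.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
tpl = img[20:28, 30:38].copy()
print(match_template(img, tpl))   # expected position: (20, 30)
```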

  5. Soft computing approach to pattern classification and object recognition a unified concept

    CERN Document Server

    Ray, Kumar S

    2012-01-01

    Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools, fuzzy relational calculus (FRC), genetic algorithm (GA) and multilayer perceptron (MLP) to provide a strong foundation for the reader. The supervised approach to pattern classification and model-based approach to occluded object recognition are treated in one framework , one based on either a conventional interpretation or a new interpretation of

  6. Method in computer ethics: Towards a multi-level interdisciplinary approach

    NARCIS (Netherlands)

    Brey, Philip

    2000-01-01

    This essay considers methodological aspects ofcomputer ethics and argues for a multi-levelinterdisciplinary approach with a central role forwhat is called disclosive computer ethics. Disclosivecomputer ethics is concerned with the moraldeciphering of embedded values and norms in computersystems, app

  7. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    Science.gov (United States)

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
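
    The record does not spell out the estimator, but a common fluctuation formula for the area compressibility modulus, K_A = k_B T <A> / <(A - <A>)^2>, can be evaluated from a time series of bilayer areas as sketched below. The trajectory values, temperature, and lipid count are synthetic placeholders, not data from the paper.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def area_compressibility(areas_nm2, temperature=300.0):
    """Fluctuation estimate K_A = kB*T*<A>/var(A), returned in N/m."""
    a = np.asarray(areas_nm2) * 1e-18              # nm^2 -> m^2
    return KB * temperature * a.mean() / a.var()

def area_per_lipid(areas_nm2, n_lipids_per_leaflet):
    return np.mean(areas_nm2) / n_lipids_per_leaflet

# Synthetic bilayer-area trajectory (placeholder for simulation output).
rng = np.random.default_rng(2)
areas = rng.normal(loc=40.0, scale=0.8, size=5000)   # nm^2
print("K_A (N/m):      ", area_compressibility(areas, 300.0))
print("A/lipid (nm^2): ", area_per_lipid(areas, 64))
```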

  8. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    Energy Technology Data Exchange (ETDEWEB)

    Chacón, Enrique, E-mail: echacon@icmm.csic.es [Instituto de Ciencia de Materiales de Madrid, CSIC, 28049 Madrid, Spain and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Tarazona, Pedro, E-mail: pedro.tarazona@uam.es [Departamento de Física Teórica de la Materia Condensada, Condensed Matter Physics Center (IFIMAC), and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk [Department of Chemistry, Imperial College London, SW7 2AZ London (United Kingdom)

    2015-07-21

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke’s law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.

  9. Data analysis of asymmetric structures advanced approaches in computational statistics

    CERN Document Server

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and  considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  10. Computational s-block thermochemistry with the correlation consistent composite approach.

    Science.gov (United States)

    DeYonker, Nathan J; Ho, Dustin S; Wilson, Angela K; Cundari, Thomas R

    2007-10-25

    The correlation consistent composite approach (ccCA) is a model chemistry that has been shown to accurately compute gas-phase enthalpies of formation for alkali and alkaline earth metal oxides and hydroxides (Ho, D. S.; DeYonker, N. J.; Wilson, A. K.; Cundari, T. R. J. Phys. Chem. A 2006, 110, 9767). The ccCA results contrast with more widely used model chemistries, where calculated enthalpies of formation for such species can be in error by up to 90 kcal mol⁻¹. In this study, we have applied ccCA to a more general set of 42 s-block molecules and compared the ccCA ΔHf values to values obtained using the G3 and G3B model chemistries. Included in this training set are water complexes such as Na(H2O)n+ where n = 1-4, dimers and trimers of ionic compounds such as (LiCl)2 and (LiCl)3, and the largest ccCA computation to date: Be(acac)2, BeC10H14O4. Problems with the G3 model chemistries seem to be isolated to metal-oxygen bonded systems and Be-containing systems, as G3 and G3B still perform quite well with a 2.7 and 2.6 kcal mol⁻¹ mean absolute deviation (MAD), respectively, for gas-phase enthalpies of formation. The MAD of the ccCA is only 2.2 kcal mol⁻¹ for enthalpies of formation (ΔHf) for all compounds studied herein. While this MAD is roughly double that found for a ccCA study of >350 main group (i.e., p-block) compounds, it is commensurate with typical experimental uncertainties for s-block complexes. Some molecules where G3/G3B and ccCA computed ΔHf values deviate significantly from experiment, such as (LiCl)3, NaCN, and MgF, are inviting candidates for new experimental and high-level theoretical studies. PMID:17914764

  11. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    Directory of Open Access Journals (Sweden)

    G. P. Berman

    2003-01-01

    Full Text Available The dynamics of a nuclear-spin quantum computer with a large number (L=1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to different orders of the perturbation theory and tested using an exact numerical solution.

  12. A simplified approach to compute distribution matrices for the mapping method

    NARCIS (Netherlands)

    Singh, M.K.; Galaktionov, O.S.; Meijer, H.E.H.; Anderson, P.D.

    2009-01-01

    The mapping method has proven its efficiency as an analysis and optimization tool for mixing in many different flow devices. In this paper, we present a new approach to compute the coefficients of the distribution matrix, which is, both in terms of computational speed and complexity, easier to im

  13. A Two Layer Approach to the Computability and Complexity of Real Functions

    DEFF Research Database (Denmark)

    Lambov, Branimir Zdravkov

    2003-01-01

    We present a new model for computability and complexity of real functions together with an implementation that is based on it. The model uses a two-layer approach in which low-type basic objects perform the computation of a real function, but, whenever needed, can be complemented with higher type...

  14. Granular computing and decision-making interactive and iterative approaches

    CERN Document Server

    Chen, Shyi-Ming

    2015-01-01

    This volume is devoted to interactive and iterative processes of decision-making – I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems. The volume provides the reader with updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, and brings new concepts, architectures and practice of fuzzy decision-making, providing the reader with various application studies. The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...

  15. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    Directory of Open Access Journals (Sweden)

    Seth A. Herd

    2013-01-01

    Full Text Available We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  16. A computational toy model for shallow landslides: Molecular Dynamics approach

    CERN Document Server

    Martelloni, Gianluca; Massaro, Emanuele

    2012-01-01

    The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We used a Molecular Dynamics (MD) inspired model, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. Hence the triggering is governed by the two following conditions: (a) a threshold speed of the particles and (b) a condition on the static friction, between particles and slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. Finally the interaction force between particles is defined through a potential that, in the absence of experimental data, we have mode...
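
    As a small illustration of the static condition mentioned in this record, the Mohr-Coulomb failure criterion (failure when the shear stress exceeds cohesion plus effective normal stress times the tangent of the friction angle) can be checked per particle as sketched below. The pore-pressure term and all numerical values are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def mohr_coulomb_fails(shear_stress, normal_stress, cohesion,
                       friction_angle_deg, pore_pressure=0.0):
    """True where tau >= c + (sigma_n - u) * tan(phi)."""
    phi = np.radians(friction_angle_deg)
    strength = cohesion + (normal_stress - pore_pressure) * np.tan(phi)
    return shear_stress >= strength

# Rainfall infiltration is often represented as a rising pore pressure u,
# which lowers the effective normal stress and hence the shear strength.
tau = np.array([8.0, 12.0, 15.0])        # kPa, driving shear stress
sigma_n = np.array([20.0, 20.0, 20.0])   # kPa, normal stress on the surface
print(mohr_coulomb_fails(tau, sigma_n, cohesion=5.0,
                         friction_angle_deg=30.0, pore_pressure=4.0))
```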

  17. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Cai, C. [CEA, LIST, 91191 Gif-sur-Yvette, France and CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Rodet, T.; Mohammad-Djafari, A. [CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Legoupil, S. [CEA, LIST, 91191 Gif-sur-Yvette (France)

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
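
    To make the nonlinear polychromatic forward model concrete, a minimal discretized version is sketched below: the detected signal is a spectrum-weighted sum of exponentials of the two material line integrals, so its negative log is not linear in the path lengths (the origin of beam-hardening artifacts). The three-bin spectrum and attenuation coefficients are made-up placeholders, and the Bayesian MAP reconstruction itself is not reproduced.

```python
import numpy as np

def polychromatic_measurement(a_water, a_bone, spectrum, mu_water, mu_bone):
    """Expected detector signal for water/bone line integrals (cm of material).

    spectrum, mu_water, mu_bone are arrays over discrete energy bins.
    """
    att = np.exp(-(mu_water * a_water + mu_bone * a_bone))
    return np.sum(spectrum * att)

# Placeholder 3-bin spectrum and attenuation coefficients (1/cm).
spectrum = np.array([0.2, 0.5, 0.3])
mu_w = np.array([0.35, 0.25, 0.20])
mu_b = np.array([1.10, 0.60, 0.40])

# Beam hardening: -log of the signal is not linear in the path lengths.
for a_w in (0.0, 5.0, 10.0):
    y = polychromatic_measurement(a_w, 1.0, spectrum, mu_w, mu_b)
    print(a_w, -np.log(y))
```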

  18. Partial order approach to compute shortest paths in multimodal networks

    CERN Document Server

    Ensor, Andrew

    2011-01-01

    Many networked systems involve multiple modes of transport. Such systems are called multimodal, and examples include logistic networks, biomedical phenomena, manufacturing processes and telecommunication networks. Existing techniques for determining optimal paths in multimodal networks have either required heuristics or else application-specific constraints to obtain tractable problems, removing the multimodal traits of the network during analysis. In this paper weighted coloured-edge graphs are introduced to model multimodal networks, where colours represent the modes of transportation. Optimal paths are selected using a partial order that compares the weights in each colour, resulting in a Pareto optimal set of shortest paths. This approach is shown to be tractable through experimental analyses for random and real multimodal networks without the need to apply heuristics or constraints.
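
    The partial order on coloured-edge paths can be illustrated with a small dominance check: one per-colour weight vector dominates another if it is no worse in every colour and strictly better in at least one, and the Pareto set keeps the non-dominated vectors. The sketch below is a generic illustration of that order, not the paper's shortest-path algorithm.

```python
def dominates(a, b):
    """a, b: dicts mapping colour -> accumulated weight along a path."""
    colours = set(a) | set(b)
    no_worse = all(a.get(c, 0) <= b.get(c, 0) for c in colours)
    better = any(a.get(c, 0) < b.get(c, 0) for c in colours)
    return no_worse and better

def pareto_front(paths):
    """Keep the path-weight vectors not dominated by any other."""
    return [p for p in paths
            if not any(dominates(q, p) for q in paths if q is not p)]

# Weights per transport mode ("colour") for three candidate paths.
paths = [{"road": 4, "rail": 2},
         {"road": 3, "rail": 5},
         {"road": 5, "rail": 3}]   # dominated by the first path
print(pareto_front(paths))
```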

  19. Fault-tolerant quantum computation -- a dynamical systems approach

    CERN Document Server

    Fern, J; Simic, S; Sastry, S; Fern, Jesse; Kempe, Julia; Simic, Slobodan; Sastry, Shankar

    2004-01-01

    We apply a dynamical systems approach to concatenation of quantum error correcting codes, extending and generalizing the results of Rahn et al. [8] to both diagonal and nondiagonal channels. Our point of view is global: instead of focusing on particular types of noise channels, we study the geometry of the coding map as a discrete-time dynamical system on the entire space of noise channels. In the case of diagonal channels, we show that any code with distance at least three corrects (in the infinite concatenation limit) an open set of errors. For CSS codes, we give a more precise characterization of that set. We show how to incorporate noise in the gates, thus completing the framework. We derive some general bounds for noise channels, which allows us to analyze several codes in detail.

  20. Systematic Approach to Computational Design of Gene Regulatory Networks with Information Processing Capabilities.

    Science.gov (United States)

    Moskon, Miha; Mraz, Miha

    2014-01-01

    We present several measures that can be used in de novo computational design of biological systems with information processing capabilities. Their main purpose is to objectively evaluate the behavior and identify the biological information processing structures with the best dynamical properties. They can be used to define constraints that allow one to simplify the design of more complex biological systems. These measures can be applied to existent computational design approaches in synthetic biology, i.e., rational and automatic design approaches. We demonstrate their use on a) the computational models of several basic information processing structures implemented with gene regulatory networks and b) on a modular design of a synchronous toggle switch.

  1. Data science in R a case studies approach to computational reasoning and problem solving

    CERN Document Server

    Nolan, Deborah

    2015-01-01

    Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar

  2. Analyses of Physcomitrella patens Ankyrin Repeat Proteins by Computational Approach

    Science.gov (United States)

    Mahmood, Niaz; Tamanna, Nahid

    2016-01-01

    Ankyrin (ANK) repeat containing proteins are evolutionary conserved and have functions in crucial cellular processes like cell cycle regulation and signal transduction. In this study, through an entirely in silico approach using the first release of the moss genome annotation, we found that at least 54 ANK proteins are present in P. patens. Based on their differential domain composition, the identified ANK proteins were classified into nine subfamilies. Comparative analysis of the different subfamilies of ANK proteins revealed that P. patens contains almost all the known subgroups of ANK proteins found in the other angiosperm species except for the ones having the TPR domain. Phylogenetic analysis using full length protein sequences supported the subfamily classification where the members of the same subfamily almost always clustered together. Synonymous divergence (dS) and nonsynonymous divergence (dN) ratios showed positive selection for the ANK genes of P. patens which probably helped them to attain significant functional diversity during the course of evolution. Taken together, the data provided here can provide useful insights for future functional studies of the proteins from this superfamily as well as comparative studies of ANK proteins.

  3. An approach to computing marginal land use change carbon intensities for bioenergy in policy applications

    International Nuclear Information System (INIS)

    Accurately characterizing the emissions implications of bioenergy is increasingly important to the design of regional and global greenhouse gas mitigation policies. Market-based policies, in particular, often use information about carbon intensity to adjust relative deployment incentives for different energy sources. However, the carbon intensity of bioenergy is difficult to quantify because carbon emissions can occur when land use changes to expand production of bioenergy crops rather than simply when the fuel is consumed as for fossil fuels. Using a long-term, integrated assessment model, this paper develops an approach for computing the carbon intensity of bioenergy production that isolates the marginal impact of increasing production of a specific bioenergy crop in a specific region, taking into account economic competition among land uses. We explore several factors that affect emissions intensity and explain these results in the context of previous studies that use different approaches. Among the factors explored, our results suggest that the carbon intensity of bioenergy production from land use change (LUC) differs by a factor of two depending on the region in which the bioenergy crop is grown in the United States. Assumptions about international land use policies (such as those related to forest protection) and crop yields also significantly impact carbon intensity. Finally, we develop and demonstrate a generalized method for considering the varying time profile of LUC emissions from bioenergy production, taking into account the time path of future carbon prices, the discount rate and the time horizon. When evaluated in the context of power sector applications, we found electricity from bioenergy crops to be less carbon-intensive than conventional coal-fired electricity generation and often less carbon-intensive than natural-gas fired generation. - Highlights: • Modeling methodology for assessing land use change emissions from bioenergy • Use GCAM
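
    The record describes, without formulas, weighting the time profile of LUC emissions by a carbon-price path and a discount rate over a chosen horizon. A minimal sketch of one such weighting is given below; the weighting formula, units, and all numbers are illustrative assumptions rather than the GCAM-based method itself.

```python
def luc_carbon_intensity(emissions_t, bioenergy_t, carbon_price_t,
                         discount_rate, horizon):
    """Price- and discount-weighted LUC emissions per unit of bioenergy.

    emissions_t, bioenergy_t, carbon_price_t: per-year lists over the horizon.
    Returns kg CO2-equivalent per GJ under the assumed weighting.
    """
    w = [carbon_price_t[t] / (1.0 + discount_rate) ** t for t in range(horizon)]
    weighted_emissions = sum(w[t] * emissions_t[t] for t in range(horizon))
    weighted_energy = sum(w[t] * bioenergy_t[t] for t in range(horizon))
    return weighted_emissions / weighted_energy

# Toy example: a one-off land-conversion emission pulse amortized over 30 years
# of constant bioenergy output, a rising carbon price, and 5% discounting.
years = 30
emissions = [5.0e6] + [0.0] * (years - 1)          # kg CO2e per year
energy = [1.0e5] * years                           # GJ per year
prices = [20.0 * 1.03 ** t for t in range(years)]  # $/t CO2e, 3% annual growth
print(luc_carbon_intensity(emissions, energy, prices, 0.05, years))
```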

  4. (Validation of) computational fluid dynamics modeling approach to evaluate VSC-17 dry storage cask thermal designs

    International Nuclear Information System (INIS)

    This paper presents results from a numerical analysis of the thermal evaluation of a Ventilated Concrete Storage Cask VSC-17 system. Three-dimensional simulations are performed for the VSC-17 system, and the results are compared to experimental data. The VSC-17 is a concrete-shielded spent nuclear fuel (SNF) cask system designed to contain 17 pressurized water reactor (PWR) fuel assemblies for storage and transportation. The system consists of a ventilated concrete cask (VCC) and a multi-assembly sealed basket (MSB). The VCC is a concrete cylindrical vessel, fabricated as a single piece and fitted with a flat plate at the bottom. The concrete cask provides structural support, shielding, and natural convection cooling for the MSB. The MSB has an outer steel shell and an inner fuel guide sleeve assembly that holds canisters containing spent fuel rods. Cooling airflow inside the concrete cask is driven by natural convection. Heat transfer in the cask is a complicated process because of the inherent complexity of the geometry and the fixed and natural convection induced by the radioactive decay process. Other factors that contribute to the overall heat transfer include the heat generation by the spent fuel, the thermal boundary condition, the filling medium within the MSB, and the vertical or horizontal orientation of the cask. Proper thermal analysis of dry storage casks is important for accurate estimation of the peak fuel temperature and peak cladding temperature (PCT). Proper estimation of the PCT ensures the integrity of the cladding and is important for the safety evaluation of independent spent fuel storage installations. The spent nuclear fuel may be exposed to air and oxidize if the cladding is damaged, thus increasing the potential for release of radioactivity. In the current analysis, numerical simulations are carried out using the computational fluid

  5. Effects of artificial gravity on the cardiovascular system: Computational approach

    Science.gov (United States)

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configuration and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.

  6. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio;

    2012-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...

  7. A Knowledge Engineering Approach to Developing Educational Computer Games for Improving Students' Differentiating Knowledge

    Science.gov (United States)

    Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen

    2013-01-01

    Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…

  8. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
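
    At its core, each scenario's contribution to the Schur complement is a term of the form B_i^T A_i^{-1} B_i, obtained by solving one sparse system against many right-hand sides. The small SciPy sketch below shows only that structure; the incomplete augmented factorization, PARDISO, and BiCGStab refinement described in the record are not reproduced, and the matrices are random toys.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def scenario_schur_contribution(A_i, B_i):
    """Return B_i^T A_i^{-1} B_i using a sparse LU factorization of A_i."""
    lu = spla.splu(A_i.tocsc())
    X = lu.solve(B_i)              # one solve with multiple right-hand sides
    return B_i.T @ X

# Toy first-stage/second-stage coupling accumulated over three scenarios.
rng = np.random.default_rng(3)
n, m = 50, 4                       # scenario size, first-stage size
S = np.zeros((m, m))
for _ in range(3):
    A_i = sp.eye(n) * 4.0 + sp.random(n, n, density=0.05, random_state=3)
    B_i = rng.random((n, m))
    S += scenario_schur_contribution(A_i, B_i)
print(S.shape)
```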

  9. Efficient all-optical quantum computing based on a hybrid approach

    CERN Document Server

    Lee, Seung-Woo

    2011-01-01

    Quantum computers are expected to offer phenomenal increases of computational power. In spite of many proposals based on various physical systems, scalable quantum computation in a fault-tolerant manner is still beyond current technology. Optical models have some prominent advantages such as relatively quick operation time compared to decoherence time. However, massive resource requirements and the gap between the fault tolerance limit and the realistic error rate should be significantly reduced. Here, we develop a novel approach with all-optical hybrid qubits devised to combine advantages of well-known previous approaches. It enables one to efficiently perform universal gate operations in a simple and near-deterministic way using all-optical hybrid entanglement as off-line resources. Remarkably, our approach outperforms the previous ones when considering both the resource requirements and fault tolerance limits. Our work paves an efficient way for the optical realization of scalable quantum computation.

  10. A computational approach to the functional screening of genomes.

    Directory of Open Access Journals (Sweden)

    Davide Chiarugi

    2007-09-01

    Full Text Available Comparative genomics usually involves managing the functional aspects of genomes, by simply comparing gene-by-gene functions. Following this approach, Mushegian and Koonin proposed a hypothetical minimal genome, the Minimal Gene Set (MGS), aiming for a possible oldest ancestor genome. They obtained the MGS by comparing the genomes of two simple bacteria and eliminating duplicated or functionally identical genes. The authors raised the fundamental question of whether a hypothetical organism possessing the MGS is able to live or not. We attacked this viability problem by specifying in silico the metabolic pathways of the MGS-based prokaryote. We then performed a dynamic simulation of cellular metabolic activities in order to check whether the MGS-prokaryote reaches some equilibrium state and produces the necessary biomass. We assumed these two conditions to be necessary for a living organism. Our simulations clearly show that the MGS does not express an organism that is able to live. We then iteratively proceeded with functional replacements in order to obtain a genome composition that gives rise to equilibrium. We ruled out 76 of the original 254 genes in the MGS, because they resulted in duplication from a functional point of view. We also added seven genes not present in the MGS. These genes encode enzymes involved in critical nodes of the metabolic network. These modifications led to a genome composed of 187 elements expressing a virtually living organism, Virtual Cell (ViCe), that exhibits homeostatic capabilities and produces biomass. Moreover, the steady-state distribution of the concentrations of virtual metabolites that resulted was similar to that experimentally measured in bacteria. We conclude then that ViCe is able to "live in silico."

  11. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

    Full Text Available We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson’s Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal and PD patients accurately (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge the model is the first model of precision grip in PD conditions.
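
    As a rough illustration of utility-based (rather than purely value-based) action selection, the sketch below combines an action's expected value with a risk penalty before a softmax choice. The specific utility form U = Q - kappa*sqrt(variance) and all parameter values are generic assumptions, not the exact utility function used in the model.

```python
import numpy as np

def select_action(q_values, q_variances, kappa=0.5, beta=5.0, rng=None):
    """Risk-sensitive choice: utility = value - kappa * risk, then softmax."""
    rng = rng or np.random.default_rng()
    utility = np.asarray(q_values) - kappa * np.sqrt(q_variances)
    p = np.exp(beta * (utility - utility.max()))
    p /= p.sum()
    return rng.choice(len(p), p=p), utility

# Two candidate grip-force levels with equal mean value but different risk:
# a risk-averse utility (kappa > 0) prefers the low-variance action.
q = [1.0, 1.0]
var = [0.04, 0.64]
action, u = select_action(q, var, kappa=0.5)
print(action, u)
```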

  12. Computational model of precision grip in Parkinson's disease: a utility based approach.

    Science.gov (United States)

    Gupta, Ankur; Balasubramani, Pragathi P; Chakravarthy, V Srinivasa

    2013-01-01

    We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force, into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using utility distribution instead of using purely Value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge the model is the first model of PG in PD conditions. PMID:24348373

  13. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.

  14. Discovery Learning and the Computational Experiment in Higher Mathematics and Science Education: A Combined Approach

    Directory of Open Access Journals (Sweden)

    Athanasios Kyriazis

    2009-12-01

    Full Text Available In this article we present our research for Discovery learning in relation to the computational experiment for the instruction of Mathematics and Science university courses, using the approach of the computational experiment through electronic worksheets. The approach is based on the principles of Discovery learning expanded with the principles of constructivist, socio–cultural and adult learning theories, the concept of computer based cognitive tools and the aspects on which the computational experiment is founded. Applications are presented using the software Mathematica and electronic worksheets for selected domains of Physics. We also present a case study, concerning the application of the computational experiment through electronic worksheets in the School of Pedagogical and Technological Education (ASPETE during the spring semester of the academic year 2008–2009. Research results concerning the impact of the above mentioned issues on students’ beliefs and learning performance are presented.

  15. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Fillippi, Anthony [Texas A& M University; Bhaduri, Budhendra L [ORNL; Naughton, III, Thomas J [ORNL; King, Amy L [ORNL; Scott, Stephen L [ORNL; Guneralp, Inci [Texas A& M University

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.

  16. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Filippi, Anthony M [ORNL; Bhaduri, Budhendra L [ORNL; Naughton, III, Thomas J [ORNL; King, Amy L [ORNL; Scott, Stephen L [ORNL; Guneralp, Inci [Texas A& M University

    2012-01-01

    Abstract For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent-and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) a 40x speed-up. Tools developed for this parallel execution are discussed.

  17. Rosiglitazone: can meta-analysis accurately estimate excess cardiovascular risk given the available data? Re-analysis of randomized trials using various methodologic approaches

    Directory of Open Access Journals (Sweden)

    Friedrich Jan O

    2009-01-01

    , although far from statistically significant. Conclusion We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated.

  18. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  19. An Exploration of Hyperion Hyperspectral Imagery Combined with Different Supervised Classification Approaches Towards Obtaining More Accurate Land Use/Cover Cartography

    Science.gov (United States)

    Igityan, Nune

    2014-05-01

    Land use and land cover (LULC) constitutes a key variable of the Earth's system that has in general shown a close correlation with human activities and the physical environment. Describing the pattern and the spatial distribution of LULC is traditionally based on remote sensing data analysis and, evidently, one of the most commonly techniques applied has been image classification. The main objective of the present study has been to evaluate the combined use of Hyperion hyperspectral imagery with a range of supervised classification algorithms widely available today for discriminating LULC classes in a typical Mediterranean setting. Accuracy assessment of the derived thematic maps was based on the analysis of the classification confusion matrix statistics computed for each classification map, using for consistency the same set of validation points. Those were selected on the basis of photo-interpretation of high resolution aerial imagery and of panchromatic imagery available for the studied region at the time of the Hyperion overpass. Results indicated close classification accuracy between the different classifiers with the SVMs outperforming the other classification approaches. The higher classification accuracy by SVMs was attributed principally to the ability of this classifier to identify an optimal separating hyperplane for classes' separation which allows a low generalisation error, thus producing the best possible classes' separation. Although all classifiers produced close results, SVMs generally appeared most useful in describing the spatial distribution and the cover density of each land cover category. All in all, this study demonstrated that, provided that a Hyperion hyperspectral imagery can be made available at regular time intervals over a given region, when combined with SVMs classifiers, can potentially enable a wider approach in land use/cover mapping. This can be of particular importance, especially for regions like in the Mediterranean basin
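
    Since the accuracy assessment in such studies rests on confusion-matrix statistics, the short sketch below computes the two most common summary figures, overall accuracy and Cohen's kappa, from a class-by-class count matrix; the matrix values are placeholders, not the study's results.

```python
import numpy as np

def confusion_stats(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of validation points of true class i labelled as class j.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Placeholder 3-class matrix (e.g. forest / crops / urban).
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
oa, kappa = confusion_stats(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
```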

  20. Teleportation-based quantum computation, extended Temperley-Lieb diagrammatical approach and Yang-Baxter equation

    Science.gov (United States)

    Zhang, Yong; Zhang, Kun; Pang, Jinglong

    2016-01-01

    This paper focuses on the study of topological features in teleportation-based quantum computation and aims at presenting a detailed review on teleportation-based quantum computation (Gottesman and Chuang in Nature 402: 390, 1999). In the extended Temperley-Lieb diagrammatical approach, we clearly show that such topological features bring about the fault-tolerant construction of both universal quantum gates and four-partite entangled states more intuitive and simpler. Furthermore, we describe the Yang-Baxter gate by its extended Temperley-Lieb configuration and then study teleportation-based quantum circuit models using the Yang-Baxter gate. Moreover, we discuss the relationship between the extended Temperley-Lieb diagrammatical approach and the Yang-Baxter gate approach. With these research results, we propose a worthwhile subject, the extended Temperley-Lieb diagrammatical approach, for physicists in quantum information and quantum computation.

  1. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    Science.gov (United States)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
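
    The adaptive gate-controlled programming loop can be pictured with the sketch below: read the cell, and if the conductance is below target issue a SET pulse with a gradually increased gate voltage, otherwise a RESET pulse, until the reading falls within tolerance. The callbacks read_conductance, set_pulse, and reset_pulse are hypothetical stand-ins for instrument or array-driver calls, and the step sizes, tolerance, and toy cell response are assumptions, not the published algorithm's parameters.

```python
def program_cell(target_g, read_conductance, set_pulse, reset_pulse,
                 tol=0.05, v_gate=0.9, v_gate_step=0.02, max_pulses=100):
    """Drive a 1T1R cell toward target_g (siemens) within a relative tolerance.

    read_conductance(), set_pulse(v_gate), reset_pulse() are hypothetical
    callbacks wrapping the actual instrument or array interface.
    """
    for pulse in range(max_pulses):
        g = read_conductance()
        error = (g - target_g) / target_g
        if abs(error) <= tol:
            return pulse, g                  # converged
        if error < 0:
            set_pulse(v_gate)                # under target: SET, gate-limited
            v_gate += v_gate_step            # allow a slightly higher current
        else:
            reset_pulse()                    # over target: partial RESET
    return max_pulses, read_conductance()

# Toy stand-in for a memristor cell, just to exercise the loop.
class FakeCell:
    def __init__(self):
        self.g = 20e-6
    def read(self):
        return self.g
    def set_pulse(self, v_gate):
        self.g += 5e-6 * v_gate              # crude SET response
    def reset_pulse(self):
        self.g *= 0.8                        # crude partial RESET

cell = FakeCell()
print(program_cell(60e-6, cell.read, cell.set_pulse, cell.reset_pulse))
```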

  2. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    Science.gov (United States)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.

  3. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    Science.gov (United States)

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.

  4. Use of Integrated Computational Approaches in the Search for New Therapeutic Agents.

    Science.gov (United States)

    Persico, Marco; Di Dato, Antonio; Orteca, Nausicaa; Cimino, Paola; Novellino, Ettore; Fattorusso, Caterina

    2016-09-01

    Computer-aided drug discovery plays a strategic role in the development of new potential therapeutic agents. Nevertheless, the modeling of biological systems still represents a challenge for computational chemists and at present a single computational method able to face such challenge is not available. This prompted us, as computational medicinal chemists, to develop in-house methodologies by mixing various bioinformatics and computational tools. Importantly, thanks to multi-disciplinary collaborations, our computational studies were integrated and validated by experimental data in an iterative process. In this review, we describe some recent applications of such integrated approaches and how they were successfully applied in i) the search of new allosteric inhibitors of protein-protein interactions and ii) the development of new redox-active antimalarials from natural leads.

  5. Fully Integrated Approach to Compute Vibrationally Resolved Optical Spectra: From Small Molecules to Macrosystems.

    Science.gov (United States)

    Barone, Vincenzo; Bloino, Julien; Biczysko, Malgorzata; Santoro, Fabrizio

    2009-03-10

    A general and effective time-independent approach to compute vibrationally resolved electronic spectra from first principles has been integrated into the Gaussian computational chemistry package. This computational tool offers a simple and easy-to-use way to compute theoretical spectra starting from geometry optimization and frequency calculations for each electronic state. It is shown that in such a way it is straightforward to combine calculation of Franck-Condon integrals with any electronic computational model. The given examples illustrate the calculation of absorption and emission spectra, all in the UV-vis region, of various systems from small molecules to large ones, in gas as well as in condensed phases. The computational models applied range from fully quantum mechanical descriptions to discrete/continuum quantum mechanical/molecular mechanical/polarizable continuum models. PMID:26610221

  6. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two main approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross sectional CT slices using a combined method of the two aforementioned approaches. The proposed method of this study is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which are a characteristic of the knowledge based method. Secondly, the same template sets were employed to perform template matching, related to the learning methodologies approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined template-set-centric approach proposed in this study.

  7. New Computational Approaches to Determining the Astronomical Vessel Position Based on the Sumner Line

    Directory of Open Access Journals (Sweden)

    Chen Chih-Li

    2015-01-01

    Full Text Available In this paper two new approaches are developed to calculate the astronomical vessel position (AVP). Basically, determining the AVP originates from the concept of spherical equal altitude circles (EACs); therefore, based on the Sumner line's idea, which implies a trial-and-error procedure on the assumed position, the AVP is determined by using the two proposed approaches. One replaces the EAC with a great circle of spherical geometry to fix the AVP, and the other replaces the EAC with a straight line of plane geometry to yield the AVP. To ensure the real AVP, both approaches run an iteration scheme over the assumed latitude interval to determine the final AVP. Several benchmark examples are demonstrated to show that the proposed approaches are more accurate and universal compared with the conventional approaches used in maritime education or practical operations.
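
    For context, the standard sight-reduction relations behind any equal-altitude-circle method are the computed altitude Hc = asin(sin L sin d + cos L cos d cos LHA) and the corresponding azimuth; the sketch below evaluates them for an assumed position so that the intercept (observed minus computed altitude) can be formed. This is generic celestial-navigation arithmetic, not either of the paper's two new approaches, and the example sight is invented.

```python
import math

def computed_altitude_azimuth(lat_deg, dec_deg, lha_deg):
    """Altitude Hc and azimuth Zn (degrees) of a body from an assumed position."""
    L, d, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
    sin_hc = math.sin(L) * math.sin(d) + math.cos(L) * math.cos(d) * math.cos(lha)
    hc = math.asin(sin_hc)
    # Azimuth resolved into the correct quadrant, measured from north.
    z = math.atan2(math.sin(lha),
                   math.cos(lha) * math.sin(L) - math.tan(d) * math.cos(L))
    zn = (math.degrees(z) + 180.0) % 360.0
    return math.degrees(hc), zn

# Invented sight: assumed latitude 41.5 N, declination -12, local hour angle 317.
hc, zn = computed_altitude_azimuth(lat_deg=41.5, dec_deg=-12.0, lha_deg=317.0)
observed_altitude = 23.52                          # corrected sextant altitude, deg
intercept_nm = 60.0 * (observed_altitude - hc)     # toward/away, nautical miles
print(round(hc, 3), round(zn, 1), round(intercept_nm, 1))
```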

  8. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in algorithmic and computer-based approaches to medical imaging applications and to utilize them in real-world clinical applications. The book is divided into four parts, Part-I: Clinical Applications of Medical Imaging, Part-II: Classification and Clustering, Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies, and Part-IV: Bio-inspiring based Computer Aided Diagnosis Techniques.

  9. Mathematics of shape description a morphological approach to image processing and computer graphics

    CERN Document Server

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems, using the morphological and set-theoretic approach to image processing and computer graphics and presenting a simple shape model based on two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: Explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; Promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; ...

  10. Students' Tracking Data: an Approach for Efficiently Tracking Computer Mediated Communications in Distance Learning

    OpenAIRE

    May, Madeth; George, Sébastien; Prévôt, Patrick

    2008-01-01

    This paper presents an approach for closely observing the different levels of Human and Computer Interactions during student's communication activities on Computer Mediated Communication (CMC) tools. It focuses on how to efficiently track learners' activities, along with their outputs, and to exploit the collected tracking data in order to assist participants, both learners and teachers, in the learning process. Three experiments using the system a...

  11. Approaches to Transient Computing for Energy Harvesting Systems: A Quantitative Evaluation

    OpenAIRE

    Rodriguez, Alberto; Balsamo, Domenico; Das, Anup; Weddell, Alex S.; Brunelli, Davide; Al-Hashimi, Bashir; Merrett, Geoff V

    2015-01-01

    Systems operating from harvested sources typically integrate batteries or supercapacitors to smooth out rapid changes in harvester output. However, such energy storage devices require time for charging and increase the size, mass and cost of the system. A recent approach to address this is to power systems directly from the harvester output, termed transient computing. To solve the problem of having to restart computation from the start due to power-cycles, a number of techniques have been pr...

  12. Computer based virtual reality approach towards its application in an accidental emergency at nuclear power plant

    International Nuclear Information System (INIS)

    Virtual reality is a computer-based system for creating and experiencing virtual worlds. As an emerging branch of the computer discipline, this approach is expanding rapidly and is widely used in a variety of industries such as national defence, research, engineering, medicine and air navigation. The author presents the fundamentals of virtual reality in an attempt to examine some aspects of interest for use in nuclear power emergency planning

  13. Non-invasive computation of aortic pressure maps: a phantom-based study of two approaches

    Science.gov (United States)

    Delles, Michael; Schalck, Sebastian; Chassein, Yves; Müller, Tobias; Rengier, Fabian; Speidel, Stefanie; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2014-03-01

    Patient-specific blood pressure values in the human aorta are an important parameter in the management of cardiovascular diseases. A direct measurement of these values is only possible by invasive catheterization at a limited number of measurement sites. To overcome these drawbacks, two non-invasive approaches of computing patient-specific relative aortic blood pressure maps throughout the entire aortic vessel volume are investigated by our group. The first approach uses computations from complete time-resolved, three-dimensional flow velocity fields acquired by phase-contrast magnetic resonance imaging (PC-MRI), whereas the second approach relies on computational fluid dynamics (CFD) simulations with ultrasound-based boundary conditions. A detailed evaluation of these computational methods under realistic conditions is necessary in order to investigate their overall robustness and accuracy as well as their sensitivity to certain algorithmic parameters. We present a comparative study of the two blood pressure computation methods in an experimental phantom setup, which mimics a simplified thoracic aorta. The comparative analysis includes the investigation of the impact of algorithmic parameters on the MRI-based blood pressure computation and the impact of extracting pressure maps in a voxel grid from the CFD simulations. Overall, a very good agreement between the results of the two computational approaches can be observed despite the fact that both methods used completely separate measurements as input data. Therefore, the comparative study of the presented work indicates that both non-invasive pressure computation methods show an excellent robustness and accuracy and can therefore be used for research purposes in the management of cardiovascular diseases.

  14. Portable Parallel CORBA Objects: an Approach to Combine Parallel and Distributed Programming for Grid Computing

    OpenAIRE

    Denis, Alexandre; Pérez, Christian; Priol, Thierry

    2001-01-01

    With the availability of Computational Grids, new kinds of applications will soon emerge and raise the problem of how to program them on such computing systems. In this paper, we advocate a programming model that is based on a combination of parallel and distributed programming models. Compared to previous approaches, this work aims at bringing SPMD programming into CORBA. For example, we want to interconnect two MPI codes by CORBA without modifying MPI or CORBA. We show that such an ap...

  15. Computer-based diagnostic and prognostic approaches in medical research using brain MRI

    OpenAIRE

    Weygandt, Martin

    2016-01-01

    This habilitation thesis on "Computer-based diagnostic and prognostic approaches in medical research using brain MRI" is organized into two sections. Specifically, the first section gives an overview of various aspects of the computer- and MRI-based prediction approach. The second section describes the articles from this field that I submitted for the habilitation. Specifically, the first section of the thesis begins with the fundamen...

  16. Parallel kd-Tree Based Approach for Computing the Prediction Horizon Using Wolf’s Method

    OpenAIRE

    Águila, J. J.; Arias, E.; Artigao, M. M.; Miralles, J.J.

    2015-01-01

    In different fields of science and engineering, a model of a given underlying dynamical system can be obtained by means of measurement data records called time series. This model becomes very important to understand the original system behaviour and to predict the future values of that system. From the model, parameters such as the prediction horizon can be computed to obtain the point where the prediction becomes useless. In this work, a new parallel kd-tree based approach for computing the ...
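
    The role of the kd-tree in such a scheme is fast nearest-neighbour search over delay-embedded state vectors. The sketch below (a rough, serial illustration under assumed parameters, not the authors' parallel implementation of Wolf's method) uses SciPy's cKDTree to find neighbours and averages the logarithmic growth of their separation:

```python
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(x, dim, tau):
    """Build delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def divergence_rate(x, dim=2, tau=1, horizon=3, theiler=10):
    """Rough largest-exponent estimate: find each point's nearest neighbour with a
    kd-tree (skipping temporally close points) and average the log separation growth."""
    emb = delay_embed(x, dim, tau)
    tree = cKDTree(emb)
    n = len(emb) - horizon
    logs = []
    for i in range(n):
        _, idxs = tree.query(emb[i], k=2 * theiler + 4)
        cand = [j for j in idxs[1:] if abs(j - i) > theiler and j < n]
        if not cand:
            continue
        j = cand[0]
        d0 = np.linalg.norm(emb[i] - emb[j])
        d1 = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        if d0 > 0 and d1 > 0:
            logs.append(np.log(d1 / d0) / horizon)
    return float(np.mean(logs))

# Toy usage on a chaotic logistic-map time series.
x = np.empty(3000)
x[0] = 0.4
for t in range(2999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
print("estimated divergence rate per step:", divergence_rate(x))
```

    The per-point neighbour queries are independent, which is what makes the kd-tree stage straightforward to parallelize.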

  17. A Computational Approach to Essential and Nonessential Objective Functions in Linear Multicriteria Optimization

    CERN Document Server

    Malinowska, Agnieszka B

    2007-01-01

    The question of obtaining well-defined criteria for multiple criteria decision making problems is well known. One of the approaches dealing with this question is the concept of a nonessential objective function. A certain objective function is called nonessential if the set of efficient solutions is the same both with and without that objective function. In this paper we put together two methods for determining nonessential objective functions. A computational implementation is done using a computer algebra system.

  18. A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Aleksander Byrski

    2010-01-01

    Full Text Available In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems as well as the means to manipulate the configuration of the computation according to the crises or the user's specific demands.

  19. A radial basis function network approach for the computation of inverse continuous time variant functions.

    Science.gov (United States)

    Mayorga, René V; Carrera, Jonathan

    2007-06-01

    This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time-variant functions via an overall damped least-squares solution that includes a novel null-space vector for singularity prevention. The singularity-avoidance null-space vector is derived from a sufficiency condition for singularity prevention, which leads to establishing some characterizing matrices and an associated performance index.
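
    The damped least-squares core of such a scheme, with an added null-space term for singularity avoidance, can be written in a few lines; the RBFN that the paper trains to learn the inverse mapping is not reproduced here, and the Jacobian and vectors below are made-up toy values:

```python
import numpy as np

def damped_least_squares_step(J, dx, damping=0.1, null_vec=None):
    """Solve J dq ~= dx with Tikhonov damping so the solution stays well-behaved
    near singularities; optionally add a null-space motion that does not affect dx."""
    m, n = J.shape
    JJt = J @ J.T
    reg = JJt + (damping ** 2) * np.eye(m)
    dq = J.T @ np.linalg.solve(reg, dx)
    if null_vec is not None:
        # Approximate null-space projector built from the damped pseudo-inverse.
        J_pinv = J.T @ np.linalg.inv(reg)
        dq += (np.eye(n) - J_pinv @ J) @ null_vec
    return dq

# Toy usage: a nearly singular 2x3 Jacobian of a time-variant map.
J = np.array([[1.0, 0.5, 0.0],
              [2.0, 1.0, 1e-6]])
dx = np.array([0.3, 0.1])
print(damped_least_squares_step(J, dx, damping=0.05,
                                null_vec=np.array([0.0, 0.0, 1.0])))
```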

  20. A Workflow-Forecast Approach To The Task Scheduling Problem In Distributed Computing Systems

    OpenAIRE

    Gritsenko, Andrey

    2013-01-01

    The aim of this paper is to provide a description of a deep-learning-based scheduling approach for academic-purpose high-performance computing systems. The share of academic-purpose distributed computing systems (DCS) reaches 17.4 percent among TOP500 supercomputer sites (15.6 percent by performance), which makes them a valuable object of research. The core of this approach is to predict the future workflow of the system depending on the previously submitted tasks using deep learning al...

  1. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services termed Opportunistic Cloud Computing Services (OCCS), for enterprises; and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS... will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay or paying a minimal fee for the services. The OCCS network will be modelled and implemented... as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need...

  2. A novel approach for implementing Steganography with computing power obtained by combining Cuda and Matlab

    CERN Document Server

    Patel, Samir B; Ambegaokar, Saumitra U

    2009-01-01

    With the current development of multiprocessor systems, the drive to compute data on such processors has also increased exponentially. If multi-core processors are not fully utilized, then even though we have the computing power, the speed is not available to end users for their applications. Accordingly, users and application designers have to design newer applications that take advantage of the available computing infrastructure. Our approach is to use CUDA (Compute Unified Device Architecture) as the backend and MATLAB as the front end to design an application implementing steganography. Steganography is the term used for hiding information in a cover object such as image, audio or video data. As the computing required for multimedia data is much greater than for text information, we have been successful in implementing image steganography with the help of this next-generation technology.

  3. Anharmonic-potential-effective-charge approach for computing Raman cross sections of a gas

    Science.gov (United States)

    Kutteh, Ramzi; van Zandt, L. L.

    1993-05-01

    An anharmonic-potential-effective-charge approach for computing relative Raman intensities of a gas is developed. The equations of motion are set up and solved for the driven anharmonic molecular vibrations. An explicit expression for the differential polarizability tensor is derived and its properties discussed. This expression is then used within the context of Placzek's theory [Handbuch der Radiologie (Akademische Verlagsgesellschaft, Leipzig, 1934), Vol. VI] to compute the Raman cross section and depolarization ratio of a gas. The computation is carried out for the small molecules CO2, CS2, SO2, and CCl4; results are compared with experimental measurements and discussed.

  4. A Computer-Aided FPS-Oriented Approach for Construction Briefing

    Institute of Scientific and Technical Information of China (English)

    Xiaochun Luo; Qiping Shen

    2008-01-01

    Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful in performing product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic, efficient identification, clarification, and representation of client requirements in trial runs. The limitations of the approach and future research work are also discussed at the end of the paper.

  5. Accurate calculation of mutational effects on the thermodynamics of inhibitor binding to p38α MAP kinase: a combined computational and experimental study.

    Science.gov (United States)

    Zhu, Shun; Travis, Sue M; Elcock, Adrian H

    2013-07-01

    A major current challenge for drug design efforts focused on protein kinases is the development of drug resistance caused by spontaneous mutations in the kinase catalytic domain. The ubiquity of this problem means that it would be advantageous to develop fast, effective computational methods that could be used to determine the effects of potential resistance-causing mutations before they arise in a clinical setting. With this long-term goal in mind, we have conducted a combined experimental and computational study of the thermodynamic effects of active-site mutations on a well-characterized and high-affinity interaction between a protein kinase and a small-molecule inhibitor. Specifically, we developed a fluorescence-based assay to measure the binding free energy of the small-molecule inhibitor, SB203580, to the p38α MAP kinase and used it to measure the inhibitor's affinity for five different kinase mutants involving two residues (Val38 and Ala51) that contact the inhibitor in the crystal structure of the inhibitor-kinase complex. We then conducted long, explicit-solvent thermodynamic integration (TI) simulations in an attempt to reproduce the experimental relative binding affinities of the inhibitor for the five mutants; in total, a combined simulation time of 18.5 μs was obtained. Two widely used force fields - OPLS-AA/L and Amber ff99SB-ILDN - were tested in the TI simulations. Both force fields produced excellent agreement with experiment for three of the five mutants; simulations performed with the OPLS-AA/L force field, however, produced qualitatively incorrect results for the constructs that contained an A51V mutation. Interestingly, the discrepancies with the OPLS-AA/L force field could be rectified by the imposition of position restraints on the atoms of the protein backbone and the inhibitor without destroying the agreement for other mutations; the ability to reproduce experiment depended, however, upon the strength of the restraints' force constant
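
    The thermodynamic-integration step itself, separate from the molecular dynamics sampling, reduces to a quadrature over the coupling parameter. A minimal sketch with invented <dU/dλ> window averages follows:

```python
import numpy as np

# Thermodynamic integration: Delta G = integral_0^1 <dU/dlambda>_lambda dlambda,
# approximated by the trapezoidal rule over the sampled lambda windows.
lambdas = np.linspace(0.0, 1.0, 11)
# Hypothetical ensemble averages of dU/dlambda (kcal/mol) from each window.
dU_dlam = np.array([-3.1, -2.4, -1.8, -1.1, -0.4, 0.2, 0.9, 1.5, 2.2, 2.8, 3.3])
delta_G = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lambdas))
print(f"Delta G for this (made-up) transformation: {delta_G:.2f} kcal/mol")
# A relative binding affinity (ddG) for a mutation is the difference between two
# such integrals, one computed with the inhibitor bound and one without.
```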

  6. Artificial intelligence and tutoring systems computational and cognitive approaches to the communication of knowledge

    CERN Document Server

    Wenger, Etienne

    2014-01-01

    Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic

  7. A COMPUTER-AIDED UNIFIED APPROACH TO MODELING PWM AND RESONANT CONVERTERS

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This letter puts forward a modeling method for the steady-state and small-signal dynamic analysis of PWM, quasi-resonant and series/parallel resonant switching converters based on a pulse-waveform integral approach. As an example, PWM and quasi-resonant converters are used to discuss the principle of the approach. The results are compared with those in the related literature. Computer-aided analyses are performed to confirm the correctness.

  8. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    Science.gov (United States)

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

    The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attractions and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing - especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. In this work such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB and using the CUDA environment, something that is usually fragmented in different scientific communities and restricted to specialists on parallel computing strategies.
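
    The global exponent computation can be illustrated serially before worrying about GPU parallelization. The sketch below (a plain CPU illustration with assumed Duffing parameters, not the authors' CUDA code) estimates the largest Lyapunov exponent of the forced Duffing oscillator with a Benettin-style two-trajectory renormalization:

```python
import numpy as np

def duffing_rhs(state, t, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    """Forced Duffing oscillator: x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t)."""
    x, v = state
    return np.array([v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)])

def rk4_step(f, state, t, dt):
    k1 = f(state, t)
    k2 = f(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(dt=0.01, steps=50000, d0=1e-8):
    """Benettin-style estimate: evolve a reference and a perturbed trajectory,
    renormalize the separation each step and accumulate the log growth."""
    ref = np.array([0.1, 0.0])
    per = ref + np.array([d0, 0.0])
    acc = 0.0
    for i in range(steps):
        t = i * dt
        ref = rk4_step(duffing_rhs, ref, t, dt)
        per = rk4_step(duffing_rhs, per, t, dt)
        d = np.linalg.norm(per - ref)
        acc += np.log(d / d0)
        per = ref + (per - ref) * (d0 / d)   # rescale separation back to d0
    return acc / (steps * dt)

print("largest Lyapunov exponent estimate:", largest_lyapunov())
```

    In a GPU setting, many such trajectories (one per initial condition or bifurcation parameter) are integrated in parallel, which is where the reported speedups come from.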

  9. Simple and accurate scheme to compute electrostatic interaction: zero-dipole summation technique for molecular system and application to bulk water.

    Science.gov (United States)

    Fukuda, Ikuo; Kamiya, Narutoshi; Yonezawa, Yasushige; Nakamura, Haruki

    2012-08-01

    The zero-dipole summation method was extended to general molecular systems, and then applied to molecular dynamics simulations of an isotropic water system. In our previous paper [I. Fukuda, Y. Yonezawa, and H. Nakamura, J. Chem. Phys. 134, 164107 (2011)], for evaluating the electrostatic energy of a classical particle system, we proposed the zero-dipole summation method, which conceptually prevents the nonzero-charge and nonzero-dipole states artificially generated by a simple cutoff truncation. Here, we consider the application of this scheme to molecular systems, as well as some fundamental aspects of general cutoff truncation protocols. Introducing an idea to harmonize the bonding interactions and the electrostatic interactions in the scheme, we develop a specific algorithm. As in the previous study, the resulting energy formula is represented by a simple pairwise function sum, enabling facile applications to high-performance computation. The accuracy of the electrostatic energies calculated by the zero-dipole summation method with the atom-based cutoff was numerically investigated, by comparison with those generated by the Ewald method. We obtained an electrostatic energy error of less than 0.01% at a cutoff length longer than 13 Å for a TIP3P isotropic water system, and the errors were quite small, as compared to those obtained by conventional truncation methods. The static property and the stability in an MD simulation were also satisfactory. In addition, the dielectric constants and the distance-dependent Kirkwood factors were measured, and their coincidences with those calculated by the particle mesh Ewald method were confirmed, although such coincidences are not easily attained by truncation methods. We found that the zero damping-factor gave the best results in a practical cutoff distance region. In fact, in contrast to the zero-charge scheme, the damping effect was insensitive in the zero-charge and zero-dipole scheme, in the molecular system we

  10. Structure and dynamics of near-threshold leptons driven by dipolar interactions: an accurate computational study for the DNA purinic bases

    Science.gov (United States)

    Carelli, Fabio; Gianturco, Francesco Antonio

    2016-06-01

    The interaction of low-energy scattering electrons/positrons with molecular targets characterized by a "supercritical" permanent dipole moment (≳2.0 D) presents special physical characteristics that affect their spatial distributions, around the nuclear network of the molecular partners, both above and below the energy thresholds. Such special states are described as either dipole scattering states (DSS) above thresholds or as dipole bound states (DBS) below thresholds. The details of their respective behaviour will be presented and discussed in this work in the case of the purinic DNA bases of adenine and guanine. The behavior of the additional electron, in particular, will be discussed in detail by providing new computational results that will be related to the findings from recent experiments on the same DNA bases, confirming the transient electron's behaviour surmised by them. This work is affectionately dedicated to Michael Allan on the occasion of his official retirement. We wish this dear friend and outstanding scientist many years to come in the happy pursuit of his many scientific interests. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.

  11. Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach

    Science.gov (United States)

    Khoumsi, Ahmed; Hadjou, Brahim

    2005-01-01

    Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…

  12. Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis

    NARCIS (Netherlands)

    Nikkilä, J.; Vos, de W.M.

    2010-01-01

    GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex micr

  13. Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach

    International Nuclear Information System (INIS)

    The concept and computational aspects of a Monte-Carlo statistical approach in relating structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized

  14. A Computer-Based Spatial Learning Strategy Approach That Improves Reading Comprehension and Writing

    Science.gov (United States)

    Ponce, Hector R.; Mayer, Richard E.; Lopez, Mario J.

    2013-01-01

    This article explores the effectiveness of a computer-based spatial learning strategy approach for improving reading comprehension and writing. In reading comprehension, students received scaffolded practice in translating passages into graphic organizers. In writing, students received scaffolded practice in planning to write by filling in graphic…

  15. Data Provenance and Management in Radio Astronomy: A Stream Computing Approach

    CERN Document Server

    Mahmoud, Mahmoud S; Biem, Alain; Elmegreen, Bruce; Gulyaev, Sergei

    2011-01-01

    New approaches for data provenance and data management (DPDM) are required for mega science projects like the Square Kilometer Array, characterized by extremely large data volume and intense data rates, therefore demanding innovative and highly efficient computational paradigms. In this context, we explore a stream-computing approach with the emphasis on the use of accelerators. In particular, we make use of a new generation of high performance stream-based parallelization middleware known as InfoSphere Streams. Its viability for managing and ensuring interoperability and integrity of signal processing data pipelines is demonstrated in radio astronomy. IBM InfoSphere Streams embraces the stream-computing paradigm. It is a shift from conventional data mining techniques (involving analysis of existing data from databases) towards real-time analytic processing. We discuss using InfoSphere Streams for effective DPDM in radio astronomy and propose a way in which InfoSphere Streams can be utilized for large antenna...

  16. Assessing the precision of high-throughput computational and laboratory approaches for the genome-wide identification of protein subcellular localization in bacteria

    Directory of Open Access Journals (Sweden)

    Brinkman Fiona SL

    2005-11-01

    Full Text Available Abstract Background Identification of a bacterial protein's subcellular localization (SCL is important for genome annotation, function prediction and drug or vaccine target identification. Subcellular fractionation techniques combined with recent proteomics technology permits the identification of large numbers of proteins from distinct bacterial compartments. However, the fractionation of a complex structure like the cell into several subcellular compartments is not a trivial task. Contamination from other compartments may occur, and some proteins may reside in multiple localizations. New computational methods have been reported over the past few years that now permit much more accurate, genome-wide analysis of the SCL of protein sequences deduced from genomes. There is a need to compare such computational methods with laboratory proteomics approaches to identify the most effective current approach for genome-wide localization characterization and annotation. Results In this study, ten subcellular proteome analyses of bacterial compartments were reviewed. PSORTb version 2.0 was used to computationally predict the localization of proteins reported in these publications, and these computational predictions were then compared to the localizations determined by the proteomics study. By using a combined approach, we were able to identify a number of contaminants and proteins with dual localizations, and were able to more accurately identify membrane subproteomes. Our results allowed us to estimate the precision level of laboratory subproteome studies and we show here that, on average, recent high-precision computational methods such as PSORTb now have a lower error rate than laboratory methods. Conclusion We have performed the first focused comparison of genome-wide proteomic and computational methods for subcellular localization identification, and show that computational methods have now attained a level of precision that is exceeding that of high

  17. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  18. Toward Accurate and Quantitative Comparative Metagenomics.

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  19. Digital approach to planning computer-guided surgery and immediate provisionalization in a partially edentulous patient.

    Science.gov (United States)

    Arunyanak, Sirikarn P; Harris, Bryan T; Grant, Gerald T; Morton, Dean; Lin, Wei-Shao

    2016-07-01

    This report describes a digital approach for computer-guided surgery and immediate provisionalization in a partially edentulous patient. With diagnostic data obtained from cone-beam computed tomography and intraoral digital diagnostic scans, a digital pathway of virtual diagnostic waxing, a virtual prosthetically driven surgical plan, a computer-aided design and computer-aided manufacturing (CAD/CAM) surgical template, and implant-supported screw-retained interim restorations were realized with various open-architecture CAD/CAM systems. The optional CAD/CAM diagnostic casts with planned implant placement were also additively manufactured to facilitate preoperative inspection of the surgical template and customization of the CAD/CAM-fabricated interim restorations. PMID:26868961

  20. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    Science.gov (United States)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
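
    The hidden-Markov augmentation can be illustrated with a toy Viterbi decoder over a handful of candidate cells, where stored fingerprints act as Gaussian emission centres for the observed signal strengths. The numbers, the transition matrix and the noise model below are invented and this is not the paper's algorithm:

```python
import numpy as np

def viterbi(log_emission, log_transition, log_prior):
    """Most likely sequence of discrete locations given per-step emission
    log-likelihoods (T x S), a transition matrix (S x S) and a prior (S)."""
    T, S = log_emission.shape
    delta = log_prior + log_emission[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_transition          # indexed (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emission[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy setup: 3 candidate cells, RSSI fingerprints (dBm) per cell, a short trace.
fingerprints = np.array([[-50., -70.], [-60., -60.], [-75., -52.]])
trace = np.array([[-52., -69.], [-58., -62.], [-73., -55.], [-74., -53.]])
sigma = 4.0
# Gaussian emission model around each cell's fingerprint.
log_em = -0.5 * (((trace[:, None, :] - fingerprints[None, :, :]) / sigma) ** 2).sum(-1)
# Favour staying in place or moving to an adjacent cell.
A = np.array([[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]])
print(viterbi(log_em, np.log(A + 1e-12), np.log(np.full(3, 1 / 3))))   # -> [0, 1, 2, 2]
```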

  1. An analytical approach to photonic reservoir computing - a network of SOA's - for noisy speech recognition

    Science.gov (United States)

    Salehi, Mohammad Reza; Abiri, Ebrahim; Dehyadegari, Louiza

    2013-10-01

    This paper investigates a photonic reservoir computing approach to optical speech recognition on an isolated digit recognition task. An analytical approach to photonic reservoir computing is drawn on to decrease time consumption compared to numerical methods, which is very important when processing large signals such as speech. It is also observed that adjusting the reservoir parameters, along with a good nonlinear mapping of the input signal into the reservoir in the analytical approach, boosts recognition accuracy. Perfect recognition accuracy (i.e. 100%) can be achieved for noiseless speech signals. For noisy signals with signal-to-noise ratios of 0-10 dB, however, the observed accuracy ranged between 92% and 98%. In fact, the photonic reservoir demonstrated a 9-18% improvement compared to classical reservoir networks with hyperbolic tangent nodes.
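
    The classical baseline mentioned at the end, a reservoir of hyperbolic-tangent nodes with a trained linear readout, can be sketched as follows (a conventional echo-state-network illustration on synthetic two-class signals, not the photonic SOA network or the analytical method of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed random reservoir with tanh nodes (the classical baseline).
n_res, rho, leak = 100, 0.9, 0.3
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))        # fix the spectral radius

def reservoir_features(inputs):
    """Drive the reservoir with an input sequence; return the time-averaged state."""
    x = np.zeros(n_res)
    acc = np.zeros(n_res)
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        acc += x
    return acc / len(inputs)

# Toy two-class "utterances": slow vs. fast noisy sine waves.
def make_signal(cls, n=200):
    t = np.arange(n)
    f = 0.03 if cls == 0 else 0.07
    return (np.sin(2 * np.pi * f * t) + 0.2 * rng.normal(size=n))[:, None]

labels = np.array([0, 1] * 20)
feats = np.array([reservoir_features(make_signal(c)) for c in labels])

# Linear readout trained by ridge regression, then thresholded.
w = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(n_res), feats.T @ (2 * labels - 1))
pred = (feats @ w > 0).astype(int)
print("training accuracy:", (pred == labels).mean())
```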

  2. Bayesian approaches to spatial inference: Modelling and computational challenges and solutions

    Science.gov (United States)

    Moores, Matthew; Mengersen, Kerrie

    2014-12-01

    We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
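
    Of the three estimation routes compared, pseudo-likelihood is the simplest to sketch: each site is conditioned on its 4-connected neighbours and the inverse temperature is chosen to maximize the product of these conditionals. The following toy example (an assumed two-label field, noise level and bounds, not the authors' code) illustrates it:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neighbour_counts(z, q):
    """For every pixel, count 4-connected neighbours carrying each of the q labels."""
    h, w = z.shape
    counts = np.zeros((h, w, q))
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(z, (dy, dx), axis=(0, 1))
        valid = np.ones((h, w), dtype=bool)      # mask out wrap-around at the borders
        if dy == 1: valid[0, :] = False
        if dy == -1: valid[-1, :] = False
        if dx == 1: valid[:, 0] = False
        if dx == -1: valid[:, -1] = False
        for k in range(q):
            counts[:, :, k] += valid * (shifted == k)
    return counts

def neg_pseudo_loglik(beta, z, q):
    """Negative Potts pseudo-likelihood: prod_i p(z_i | neighbours of i, beta)."""
    c = neighbour_counts(z, q)                               # (h, w, q)
    own = np.take_along_axis(c, z[:, :, None], axis=2)[:, :, 0]
    logZ = np.log(np.exp(beta * c).sum(axis=2))
    return -(beta * own - logZ).sum()

# Toy labelling: a blocky 2-label image with 15% of the labels flipped at random.
rng = np.random.default_rng(1)
z = np.zeros((32, 32), dtype=int)
z[:, 16:] = 1
flip = rng.random(z.shape) < 0.15
z[flip] = 1 - z[flip]

res = minimize_scalar(neg_pseudo_loglik, bounds=(0.0, 3.0), args=(z, 2), method="bounded")
print("pseudo-likelihood estimate of the inverse temperature:", round(res.x, 3))
```

    Path sampling and the exchange algorithm target the true likelihood rather than this approximation, at a higher computational cost.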

  3. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, building learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The C programming language for developing programming models, together with the Message Passing Interface (MPI) and OpenMP parallelization tools, has been chosen for the implementation.

  4. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  5. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    Science.gov (United States)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327

  6. A Dynamic Bayesian Network Approach to Location Prediction in Ubiquitous Computing Environments

    Science.gov (United States)

    Lee, Sunyoung; Lee, Kun Chang; Cho, Heeryon

    The ability to predict the future contexts of users significantly improves service quality and user satisfaction in ubiquitous computing environments. Location prediction is particularly useful because ubiquitous computing environments can dynamically adapt their behaviors according to a user's future location. In this paper, we present an inductive approach to recognizing a user's location by establishing a dynamic Bayesian network model. The dynamic Bayesian network model has been evaluated with a set of contextual data collected from undergraduate students. The evaluation result suggests that a dynamic Bayesian network model offers significant predictive power.
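
    A heavily simplified stand-in for such a model is a first-order Markov chain over discrete locations, learned from visit sequences; the traces and location names below are invented:

```python
from collections import defaultdict

def learn_transitions(visits):
    """Estimate P(next location | current location) from observed visit sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in visits:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def predict_next(model, current):
    """Return the most probable next location for the current one."""
    options = model.get(current, {})
    return max(options, key=options.get) if options else None

# Hypothetical traces of a student's campus locations.
traces = [
    ["dorm", "cafeteria", "library", "lab", "dorm"],
    ["dorm", "cafeteria", "lab", "dorm"],
    ["dorm", "cafeteria", "library", "dorm"],
]
model = learn_transitions(traces)
print(predict_next(model, "cafeteria"))   # -> "library"
```

    A dynamic Bayesian network generalizes this by conditioning the next location on additional context variables (time of day, activity, and so on) rather than on the current location alone.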

  7. A new approach based on PSO algorithm to find good computational encoding sequences

    Institute of Scientific and Technical Information of China (English)

    Cui Guangzhao; Niu Yunyun; Wang Yanfeng; Zhang Xuncai; Pan Linqiang

    2007-01-01

    Computational encoding DNA sequence design is one of the most important steps in molecular computation. A lot of research work has been done to design reliable sequence libraries. A revised method based on the support system developed by Tanaka et al. is proposed here, with different criteria used to construct the fitness function. We then adapt the particle swarm optimization (PSO) algorithm to our encoding problem. By using the new algorithm, a set of sequences with good quality is generated. The results also show that our PSO-based approach can rapidly converge to the minimum level for an output of the simulation model. The celerity of the algorithm fits our requirements.
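
    A generic PSO loop of the kind adapted in the paper looks as follows; the DNA-specific fitness terms (similarity, continuity, GC content and so on) are not reproduced, and a placeholder sphere function stands in for them:

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain particle swarm optimization over a continuous search space."""
    rng = np.random.default_rng(42)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Placeholder fitness; a real encoding-design fitness would score candidate
# DNA sequences against the chosen design criteria instead.
best, val = pso_minimize(lambda x: float((x ** 2).sum()), dim=4)
print(best, val)
```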

  8. An Approach to Computer Modeling of Geological Faults in 3D and an Application

    Institute of Scientific and Technical Information of China (English)

    ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai

    2006-01-01

    3D geological modeling, one of the most important applications of 3D GIS in the geosciences, forms the basis and is a prerequisite for the visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques for complex geological entities containing reverse faults are discussed and a series of approaches are proposed. The geological concepts involved in computer modeling and visualization of geological faults in 3D are explained, the types of geological fault data obtained from geological exploration are analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a fault modeling technique based on stratum recovery and a fault modeling technique based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for strata and faults, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of a fault model of the bedrock in the Beijing Olympic Green District is presented to show the practical result of this method. The principle and the process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals are established. This work deepens our comprehension of geological phenomena and the modeling approach, and establishes the basic techniques of 3D geological modeling for practical applications in the geosciences.

  9. Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler

    International Nuclear Information System (INIS)

    The current work presented a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work included NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function aiming at estimating the NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air velocities (PA) and six levels of secondary air velocities (SA) were regulated by using particle swarm optimization (PSO) so as to achieve low NOx emissions combustion. To reduce the computation time, a more flexible stopping condition was used to improve the computational efficiency without loss of quality in the optimization results. The results showed that the proposed approach provided an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which was much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in earlier work. The main advantage of PSO was that the computational cost, typically less than 25 s on a PC system, is much less than that required for ACO. This means the proposed approach is more applicable to online and real-time applications for NOx emissions minimization in actual power plant boilers.
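
    The two-stage structure (surrogate model, then search over the adjustable air-velocity levels) can be outlined on synthetic data. For brevity the sketch enumerates the small discrete PA/SA grid instead of running PSO; the SVR surrogate, the synthetic NOx response and all numbers are invented:

```python
import itertools
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stage 1: fit a surrogate NOx(operating parameters) model on synthetic records.
# Columns: [PA level, SA level, three other normalized operating parameters].
X = np.hstack([rng.integers(0, 4, (500, 1)),
               rng.integers(0, 6, (500, 1)),
               rng.uniform(0, 1, (500, 3))]).astype(float)
y = 350 - 12 * X[:, 0] - 8 * X[:, 1] + 30 * X[:, 2] + rng.normal(0, 5, 500)
surrogate = SVR(C=50.0, epsilon=1.0).fit(X, y)

# Stage 2: search the 4 PA levels x 6 SA levels for the lowest predicted NOx,
# holding the remaining operating parameters at their current (first-record) values.
fixed = X[0, 2:]
best = min(
    ((pa, sa, float(surrogate.predict(np.r_[pa, sa, fixed][None, :])[0]))
     for pa, sa in itertools.product(range(4), range(6))),
    key=lambda t: t[2],
)
print("best PA/SA levels:", best[:2], "predicted NOx (ppm):", round(best[2], 1))
```

    With nineteen real operating parameters and continuous air-velocity settings, the discrete enumeration above is replaced by a PSO search over the surrogate, which is where the reported stopping-condition savings matter.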

  10. Dystrophic calcification in muscles of legs in calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia syndrome: Accurate evaluation of the extent with 99mTc-methylene diphosphonate single photon emission computed tomography/computed tomography

    International Nuclear Information System (INIS)

    We present the case of a 35-year-old man with calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia variant scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. 99mTc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and bilateral legs which was involving the muscles and was well-delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis

  11. Dystrophic calcification in muscles of legs in calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia syndrome: Accurate evaluation of the extent with (99m)Tc-methylene diphosphonate single photon emission computed tomography/computed tomography.

    Science.gov (United States)

    Chakraborty, Partha Sarathi; Karunanithi, Sellam; Dhull, Varun Singh; Kumar, Kunal; Tripathi, Madhavi

    2015-01-01

    We present the case of a 35-year-old man with calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia variant scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. (99m)Tc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and bilateral legs which was involving the muscles and was well-delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis.

  12. Approach and tool for computer animation of fields in electrical apparatus

    International Nuclear Information System (INIS)

    The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables handling results of two- and three-dimensional physical field phenomena obtained from finite element software, or displaying movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, creating scenarios, and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)

  13. A Review of Intrusion Detection Technique by Soft Computing and Data Mining Approach

    Directory of Open Access Journals (Sweden)

    Aditya Shrivastava

    2013-09-01

    Full Text Available The growth of internet technology has led to a large amount of data communication. This communication of data is compromised by network threats and security issues. Network threats and security issues raise the problems of data integrity and data loss. To address data integrity and data loss, Anderson developed a model of an intrusion detection system about 20 years ago. Initially, intrusion detection systems worked on the statistical frequency of audit system logs. Later on, this system was improved by various researchers applying other approaches such as data mining techniques, neural networks and expert systems. Current research on intrusion detection systems uses soft computing approaches such as fuzzy logic, genetic algorithms and machine learning. In this paper some methods of data mining and soft computing for the purpose of intrusion detection are discussed. The KDDCUP99 dataset is used for performance evaluation of these techniques.

  14. A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model

    CERN Document Server

    Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2014-01-01

    Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
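
    A minimal random-walk Metropolis sketch of the statistical model $y = \eta(\theta) + \epsilon$ is given below, with a deliberately cheap toy $\eta$ and synthetic measurements; the emulation step that the paper introduces to cope with an expensive simulator is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(theta, x):
    """Cheap stand-in physics model; a real eta would be an expensive simulation."""
    return theta[0] * np.exp(-theta[1] * x)

# Synthetic measurements y = eta(theta_true) + noise.
x = np.linspace(0, 4, 25)
theta_true = np.array([2.0, 0.7])
sigma = 0.1
y = eta(theta_true, x) + rng.normal(0, sigma, x.size)

def log_post(theta):
    if np.any(theta <= 0):                 # flat prior on theta > 0
        return -np.inf
    resid = y - eta(theta, x)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis.
theta, lp = np.array([1.0, 1.0]), log_post(np.array([1.0, 1.0]))
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])         # drop burn-in
print("posterior mean:", samples.mean(axis=0), "true:", theta_true)
```

    Each iteration calls eta once; replacing those calls with a trained emulator is what makes the approach feasible when a single model run takes hours.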

  15. A Computational Approach for Analyzing and Detecting Emotions in Arabic Text

    Directory of Open Access Journals (Sweden)

    Amira F. El Gohary, Torky I. Sultan, Maha A. Hana, Mohamed M. El Dosoky

    2013-05-01

    Full Text Available The field of Affective Computing (AC) expects to narrow the communicative gap between the highly emotional human and the emotionally challenged computer by developing computational systems that recognize and respond to the affective states of the user. Affect-sensitive interfaces are being developed in a number of domains, including gaming, mental health, and learning technologies. Emotions are part of human life. Recently, interest has been growing among researchers to find ways of detecting subjective information in blogs and other online social media. This paper is concerned with the automatic detection of emotions in Arabic text. This construction is based on a moderately sized Arabic emotion lexicon used to annotate Arabic children's stories for the six basic emotions: Joy, Fear, Sadness, Anger, Disgust, and Surprise. Our approach achieves 65% accuracy for emotion detection in Arabic text.
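
    The lexicon-based core of such a system amounts to counting emotion-labelled word hits per sentence. The toy sketch below uses a tiny English word list as a stand-in for the paper's Arabic emotion lexicon:

```python
from collections import Counter

# A toy stand-in for the moderately sized Arabic emotion lexicon used in the paper.
EMOTION_LEXICON = {
    "happy": "Joy", "smile": "Joy", "laugh": "Joy",
    "afraid": "Fear", "scared": "Fear", "dark": "Fear",
    "cry": "Sadness", "lost": "Sadness",
    "angry": "Anger", "shout": "Anger",
    "dirty": "Disgust", "rotten": "Disgust",
    "suddenly": "Surprise", "unexpected": "Surprise",
}

def detect_emotion(sentence):
    """Count lexicon hits per emotion and return the dominant one (or Neutral)."""
    hits = Counter(EMOTION_LEXICON[w]
                   for w in sentence.lower().split() if w in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else "Neutral"

print(detect_emotion("The child was scared of the dark forest"))   # -> Fear
print(detect_emotion("She started to laugh and smile"))            # -> Joy
```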

  16. GROUPING BASED USER DEMAND AWARE JOB SCHEDULING APPROACH FOR COMPUTATIONAL GRID

    Directory of Open Access Journals (Sweden)

    P.SURESH

    2012-12-01

    Full Text Available Grid computing is a form of high-performance computing that solves complicated tasks and provides powerful computing abilities. The scheduler is largely responsible for effective utilization of resources and low processing time. Most scheduling algorithms fail to consider user satisfaction and resource utilization. This paper introduces a new grouping-based scheduling algorithm that takes user satisfaction into account. In this approach, fine-grained jobs are grouped into coarse-grained jobs, and the coarse-grained jobs are scheduled based on their deadlines. The simulation is done using the GridSim toolkit and the results have been compared with the user-demand aware scheduling algorithm; the results show higher user satisfaction and better hit rate, processing time and makespan. Thus the grouping-based user-demand aware algorithm results in increased user satisfaction and improved makespan and processing time.
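
    The grouping step can be sketched as packing deadline-sorted fine-grained jobs into coarse-grained groups sized to a resource's processing capacity per scheduling slot; the job data and capacity figures below are hypothetical, not the paper's GridSim setup:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    length_mi: float      # job length in million instructions (MI)
    deadline: float       # seconds

def group_jobs(jobs, resource_mips, granularity_s):
    """Pack deadline-sorted fine-grained jobs into coarse-grained groups whose
    total length fits what the resource can process within one granularity slot."""
    capacity_mi = resource_mips * granularity_s
    groups, current, used = [], [], 0.0
    for job in sorted(jobs, key=lambda j: j.deadline):
        if current and used + job.length_mi > capacity_mi:
            groups.append(current)
            current, used = [], 0.0
        current.append(job)
        used += job.length_mi
    if current:
        groups.append(current)
    return groups

# Hypothetical fine-grained jobs and a 500-MIPS resource with a 4-second slot.
jobs = [Job(i, length_mi=200 + 50 * (i % 4), deadline=10 + 3 * i) for i in range(10)]
for g in group_jobs(jobs, resource_mips=500, granularity_s=4):
    print([j.job_id for j in g], "total MI:", sum(j.length_mi for j in g))
```

    Sorting by deadline before packing is what lets the coarse-grained groups be dispatched without starving urgent jobs, which is the user-satisfaction angle of the approach.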

  17. From music similarity to music recommendation : computational approaches based on audio features and metadata

    OpenAIRE

    Bogdanov, Dmitry

    2013-01-01

    In this work we focus on user modeling for music recommendation and develop algorithms for computational understanding and visualization of music preferences. Firstly, we propose a user model starting from an explicit set of music tracks provided by the user as evidence of his/her preferences. Secondly, we study approaches to music similarity, working solely on audio content and propose a number of novel measures working with timbral, temporal, tonal, and semantic information about music. Thi...

  18. Brain Computer Interfaces for Communication in Paralysis: a Clinical-Experimental Approach

    OpenAIRE

    Hinterberger, T.; F. Nijboer; Kübler, A; Matuz, T.; Furdea, A.; Mochty, U.; Jordan, M.; Lal, T.N; Hill, J.; MELLINGER, J.; Bensch, M.; Tangermann, M.; Widmann, G; Elger, C; Rosenstiel, W.

    2007-01-01

    An overview of different approaches to brain-computer interfaces (BCIs) developed in our laboratory is given. An important clinical application of BCIs is to enable communication or environmental control in severely paralyzed patients. The BCI 'Thought-Translation Device (TTD)' allows verbal communication through the voluntary self-regulation of brain signals (e.g., slow cortical potentials (SCPs)), which is achieved by operant feedback training. Humans' ability to self-regulate their SCPs i...

  19. Computational modeling and epidemiologic approaches: a new section of the journal of translational medicine

    Directory of Open Access Journals (Sweden)

    Liebman Michael N

    2012-10-01

    Full Text Available Abstract A new section of the Journal of Translational Medicine is being introduced to encourage rapid communication of methods and results that utilize computational modeling and epidemiologic approaches in translational medicine. The focus will be on population-based studies that extend towards more molecular level analysis. Submission of studies involving methods development is encouraged where actual application and results can be shown in the healthcare and life sciences domains.

  20. The Human Adult Skeletal Muscle Transcriptional Profile Reconstructed by a Novel Computational Approach

    OpenAIRE

    Bortoluzzi, Stefania; d'Alessi, Fabio; Romualdi, Chiara; Danieli, Gian Antonio

    2000-01-01

    By applying a novel software tool, information on 4080 UniGene clusters was retrieved from three adult human skeletal muscle cDNA libraries, which were selected for being neither normalized nor subtracted. Reconstruction of a transcriptional profile of the corresponding tissue was attempted by a computational approach, classifying each transcript according to its level of expression. About 25% of the transcripts accounted for about 80% of the detected transcriptional activity, whereas most ge...