Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-11-01
Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin that corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower; Taipei, Taiwan) system demonstrate the effectiveness of the proposed approach.
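A hedged sketch of the extrapolation step (not the paper's algorithm; all numbers are hypothetical): track an impedance-mismatch indicator at a few solved operating points and extrapolate with a cubic spline to the loading where the Thevenin matching condition |Z_load| = |Z_thev| would be met, which gives the load power margin.

```python
import numpy as np
from scipy.interpolate import CubicSpline

P = np.array([400.0, 500.0, 600.0, 700.0])      # bus loading (MW), hypothetical
mismatch = np.array([0.82, 0.57, 0.33, 0.10])   # |Z_load| - |Z_thev| (p.u.), hypothetical

cs = CubicSpline(P, mismatch)                   # spline through solved points
roots = cs.roots(extrapolate=True)              # loadings where the mismatch vanishes
collapse = roots[roots > P[-1]].min()           # first crossing beyond the data
print(f"estimated load power margin: {collapse - P[-1]:.1f} MW")
```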
Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.
2014-03-01
Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.
Accurate computation of Mathieu functions
Bibby, Malcolm M
2013-01-01
This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach
Chang, Chih-Hao; Liou, Meng-Sing
2007-07-01
In this paper, we propose a new approach to computing the compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separate regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. However, the conservative form is lost in these balance equations when each individual phase is considered; in fact, the interactions that exist simultaneously in both phases manifest themselves as nonconservative terms. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture enormous details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
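For context, the single-phase building block named above is Liou's AUSM+ flux; below is a minimal one-interface sketch for the 1-D Euler equations with a perfect gas. The averaged interface sound speed is a simplification, and the extra AUSM+-up dissipation terms for liquid flows are omitted, so this illustrates the Mach/pressure splitting idea rather than the paper's full scheme.

```python
import numpy as np

GAMMA = 1.4  # perfect-gas ratio of specific heats

def ausm_plus_flux(rhoL, uL, pL, rhoR, uR, pR):
    """One AUSM+ interface flux for the 1-D Euler equations (sketch)."""
    aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
    a = 0.5 * (aL + aR)                 # simple interface sound speed
    ML, MR = uL / a, uR / a
    beta, alpha = 1.0 / 8.0, 3.0 / 16.0

    def M4(M, s):                       # split Mach-number polynomial
        if abs(M) >= 1.0:
            return 0.5 * (M + s * abs(M))
        return 0.25 * s * (M + s) ** 2 + s * beta * (M * M - 1.0) ** 2

    def P5(M, s):                       # split pressure polynomial
        if abs(M) >= 1.0:
            return 0.5 * (1.0 + s * np.sign(M))
        return 0.25 * (M + s) ** 2 * (2.0 - s * M) + s * alpha * M * (M * M - 1.0) ** 2

    m = M4(ML, +1) + M4(MR, -1)         # interface Mach number
    p = P5(ML, +1) * pL + P5(MR, -1) * pR
    HL = aL * aL / (GAMMA - 1.0) + 0.5 * uL * uL   # total enthalpies
    HR = aR * aR / (GAMMA - 1.0) + 0.5 * uR * uR
    psi = (np.array([rhoL, rhoL * uL, rhoL * HL]) if m > 0.0
           else np.array([rhoR, rhoR * uR, rhoR * HR]))  # upwinded state
    return a * m * psi + np.array([0.0, p, 0.0])   # convective + pressure flux
```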
Schirle, M; Weinschenk, T; Stevanović, S
2001-11-01
The identification of T cell epitopes from immunologically relevant antigens remains a critical step in the development of vaccines and methods for monitoring T cell responses. This review presents an overview of strategies that employ computer algorithms for the selection of candidate peptides from defined proteins and the subsequent verification of their in vivo relevance by experimental approaches. Several computer algorithms are currently being used for epitope prediction of various major histocompatibility complex (MHC) class I and II molecules, based either on the analysis of natural MHC ligands or on the binding properties of synthetic peptides. Moreover, the analysis of proteasomal digests of peptides and whole proteins has led to the development of algorithms for the prediction of proteasomal cleavages. In order to verify the generation of the predicted peptides during antigen processing in vivo as well as their immunogenic potential, several experimental approaches have been pursued in the recent past. Mass spectrometry-based bioanalytical approaches have been used specifically to detect predicted peptides among isolated natural ligands. Other strategies employ various methods for the stimulation of primary T cell responses against the predicted peptides and subsequent testing of the recognition pattern towards target cells that express the antigen.
DEFF Research Database (Denmark)
Zimmermann, Ralf
2014-01-01
) in an offline stage. The claimed trajectory is obtained locally by interpolating the given local subspaces, considered as sample points on the Grassmann manifold. It is shown that the manifold interpolation technique is subject to certain restrictions. Moreover, it turns out that the application of computing ... under a sinusoidal pitching motion ...
Wiktor, Julia; Jomard, Gérald; Torrent, Marc
2015-09-01
Many techniques have been developed in the past to compute positron lifetimes in materials from first principles. However, there is still a lack of a fast and accurate self-consistent scheme that can accurately handle the forces acting on the ions induced by the presence of the positron. We show in this paper that we have reached this goal by developing two-component density functional theory within the projector augmented-wave (PAW) method in the open-source code abinit. This tool offers the accuracy of all-electron methods with the computational efficiency of plane-wave ones. We can thus deal with supercells that contain a few hundred to thousands of atoms to study point defects as well as more extended defect clusters. Moreover, using the PAW basis set allows us to use techniques able to, for instance, treat strongly correlated systems or spin-orbit coupling, which are necessary to study heavy elements such as the actinides or their compounds.
Accurate emulators for large-scale computer experiments
Haaland, Ben. DOI: 10.1214/11-AOS929
2012-01-01
Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments; it builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.
Accurate paleointensities - the multi-method approach
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Accurate atom-mapping computation for biochemical reactions.
Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D
2012-11-26
The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
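As a rough illustration of posing atom mapping as an integer program (assuming the PuLP package is available; the atom labels and edit costs below are invented, whereas real MWED weights come from bond reaction propensities), the toy model finds a minimum-cost element-preserving bijection between reactant and product atoms:

```python
import pulp

reactant = ["C1", "C2", "O1"]
product = ["Ca", "Cb", "Ox"]
element = lambda name: name[0]                 # crude element extraction
cost = {("C1", "Ca"): 0, ("C1", "Cb"): 2,      # hypothetical edit costs
        ("C2", "Ca"): 2, ("C2", "Cb"): 0, ("O1", "Ox"): 0}

pairs = [(i, j) for i in reactant for j in product if element(i) == element(j)]
x = pulp.LpVariable.dicts("map", pairs, cat="Binary")
prob = pulp.LpProblem("atom_mapping", pulp.LpMinimize)
prob += pulp.lpSum(cost[p] * x[p] for p in pairs)        # total edit cost
for i in reactant:   # every reactant atom maps to exactly one product atom
    prob += pulp.lpSum(x[i, j] for j in product if (i, j) in x) == 1
for j in product:    # and the mapping is a bijection
    prob += pulp.lpSum(x[i, j] for i in reactant if (i, j) in x) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: j for (i, j) in pairs if x[i, j].value() == 1})
```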
Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine
2009-01-01
We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…
Kirk, David Blair
This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The nature of the design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multi-dimensional gradient estimation for a gradient descent optimization technique. The parameter-setting and optimization tools allow us to automatically choose values for compensating our circuit building blocks, based on our goals for the circuit performance. We can also use the techniques to optimize parameters for larger systems, applying the goal-based techniques hierarchically. We also describe a set of thought experiments involving circuit techniques for increasing the precision of analog computation. Our engineering design methodology is a step toward easier use of analog VLSI to solve problems in computer graphics and neural networks. We provide data measured from compensated multipliers built using these design techniques. To demonstrate the feasibility of using analog VLSI for more quantitative computation, we develop small applications using the goal-based design approach and compensated components. Finally, we conclude by discussing the expected significance of this work for the wider use of analog VLSI for
Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity
Chae, B.; Ichikawa, Y.; Kim, Y.
2003-12-01
Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is carried out at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of the roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of the roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that the aperture reduction differs from place to place owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
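A minimal sketch of the spectral step, with a synthetic two-scale profile standing in for CLSM data sampled at the stated 2.5 μm spacing:

```python
import numpy as np

dx = 2.5e-6                           # sampling interval (m)
x = np.arange(4096) * dx
profile = (5e-6 * np.sin(2 * np.pi * x / 1e-3)       # long-wavelength waviness
           + 0.3e-6 * np.sin(2 * np.pi * x / 2e-5))  # fine roughness

amp = np.abs(np.fft.rfft(profile)) / len(profile)    # amplitude spectrum
freq = np.fft.rfftfreq(len(profile), d=dx)           # spatial frequency (1/m)
dominant = freq[np.argmax(amp[1:]) + 1]              # skip the DC bin
print(f"dominant spatial frequency: {dominant:.0f} 1/m")
```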
Fast and accurate method for computing ATC with voltage stability
Eidiani, M; Vahedi, E
2002-01-01
Order 889 mandated each control area to compute ATC (Available Transfer Capability) and post it on a communication system called the Open Access Same-time Information System (OASIS). Approaches to computing ATC can be divided into static and dynamic methods. This paper presents a fast method for ATC calculation with a voltage stability termination criterion. We use an estimate of the determinant of the Jacobian matrix to assess voltage stability. This method is compared with the following methods: the difference between the energy at the SEP (Stable Equilibrium Point) and the UEP (Unstable Equilibrium Point), the ts index of Chiang, and continuation power flow. The ideas are demonstrated on 2-, 3-, 7- (CIGRE), 10-, 30- (IEEE) and 145-bus (Iowa State University) systems.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
A Distributed Weighted Voting Approach for Accurate Eye Center Estimation
Directory of Open Access Journals (Sweden)
Gagandeep Singh
2013-05-01
Full Text Available This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting-based approach, in which every pixel votes, is adopted to find potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weight of the votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied to the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state of the art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
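A toy rendition of such gradient-opposed voting is sketched below; the edge threshold, ray length, and 1/r vote weighting are illustrative guesses, not the paper's mechanism.

```python
import numpy as np
from scipy import ndimage

def eye_center_votes(gray):
    """Toy gradient-opposed voting: strong edge pixels cast votes at a few
    positions opposite to their gradient (toward the dark pupil)."""
    g = gray.astype(float)
    gx, gy = ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0)
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > np.percentile(mag, 90))   # crude edge map
    for y, x in zip(ys, xs):
        dy, dx = -gy[y, x] / mag[y, x], -gx[y, x] / mag[y, x]
        for r in range(2, 12):                          # spread votes along the ray
            vy, vx = int(round(y + r * dy)), int(round(x + r * dx))
            if 0 <= vy < acc.shape[0] and 0 <= vx < acc.shape[1]:
                acc[vy, vx] += mag[y, x] / r            # invented distance weighting
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col) candidate
```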
Computational approaches to vision
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
An Integrative Approach to Accurate Vehicle Logo Detection
Directory of Open Access Journals (Sweden)
Hao Pan
2013-01-01
required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo-detection steps, namely vehicle region detection and small RoI segmentation, rapidly localize the small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both an accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes.
Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project
National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically intensive computations...
Towards accurate quantum simulations of large systems with small computers.
Yang, Yonggang
2017-01-24
Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.
Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.
2015-01-01
The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental-computational study allowed us to determine an accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739
Energy Technology Data Exchange (ETDEWEB)
Passeri, A.; Formiconi, A.R.; De Cristofaro, M.T.E.R.; Pupi, A.; Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]
1997-04-01
It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported onto the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradient algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Directory of Open Access Journals (Sweden)
Usman Khan
2014-04-01
Full Text Available Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.
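To illustrate where the modified Bessel functions enter, the sketch below evaluates the steady radial solution T(r) = T∞ + A·I0(r/L) + B·K0(r/L) of the fin-type membrane equation, fitting A and B to two boundary temperatures; all parameter values are invented.

```python
import numpy as np
from scipy.special import i0, k0

L = 0.4e-3                         # thermal decay length sqrt(k*t/h) [m], invented
r1, r2 = 0.1e-3, 1.0e-3            # heater edge and membrane rim radii [m]
T_inf, T1, T2 = 25.0, 800.0, 25.0  # ambient, heater-edge, rim temperatures [C]

# T(r) = T_inf + A*I0(r/L) + B*K0(r/L); fit A, B to the two boundary temps.
M = np.array([[i0(r1 / L), k0(r1 / L)],
              [i0(r2 / L), k0(r2 / L)]])
A, B = np.linalg.solve(M, [T1 - T_inf, T2 - T_inf])
r = np.linspace(r1, r2, 5)
T = T_inf + A * i0(r / L) + B * k0(r / L)
print(np.round(T, 1))              # radial temperature profile
```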
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: with accurate information travelers prefer the route in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
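A minimal sketch of such a boundedly rational choice rule (the threshold value is illustrative):

```python
import random

def choose_route(info_a, info_b, br=2.0):
    """Boundedly rational route choice: if the (possibly delayed) feedback
    values differ by less than the threshold br, pick either route at
    random; otherwise take the better one. Lower value = better condition."""
    if abs(info_a - info_b) < br:
        return random.choice(["A", "B"])
    return "A" if info_a < info_b else "B"
```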
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well-known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special-purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability to capture the thermal stress waves induced by boundary heating.
A new approach to constructing efficient stiffly accurate EPIRK methods
Rainwater, G.; Tokman, M.
2016-10-01
The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
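For orientation, the simplest relative of the EPIRK family is the exponential (Rosenbrock-)Euler step u+ = u + h·φ1(hJ)F(u); the sketch below computes the φ1 action with a dense augmented-matrix trick rather than the adaptive Krylov algorithm the paper targets.

```python
import numpy as np
from scipy.linalg import expm

def phi1_action(M, v):
    """phi1(M) v = (expm(M) - I) M^{-1} v via the standard augmented-matrix
    trick, so no inverse of M is needed."""
    n = len(v)
    W = np.zeros((n + 1, n + 1))
    W[:n, :n] = M
    W[:n, n] = v
    return expm(W)[:n, n]

def exponential_euler(F, J, u, h):
    """One exponential (Rosenbrock-)Euler step: u+ = u + h*phi1(h*J(u)) F(u)."""
    return u + h * phi1_action(h * J(u), F(u))

# Usage on a stiff 2x2 linear test problem u' = A u (invented coefficients);
# for a linear problem this step reproduces the exact flow.
A = np.array([[-100.0, 1.0], [0.0, -0.5]])
F = lambda u: A @ u
J = lambda u: A
u = np.array([1.0, 1.0])
for _ in range(10):
    u = exponential_euler(F, J, u, h=0.1)
print(u)
```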
Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project
National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...
On accurate computations of bound state properties in three- and four-electron atomic systems
Frolov, Alexei M
2016-01-01
Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li$^{-}$ in its ground $2^{2}S-$state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.
A programming approach to computability
Kfoury, A J; Arbib, Michael A
1982-01-01
Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...
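As a flavor of the while-program viewpoint (with Python standing in as the metalanguage), addition written using only assignment, increment/decrement, and a while-loop:

```python
def add(x, y):
    z = x           # z := x
    while y != 0:   # while y <> 0 do
        z = z + 1   #     z := z + 1
        y = y - 1   #     y := y - 1
    return z        # od; the result is in z
```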
Computational approaches to energy materials
Catlow, Richard; Walsh, Aron
2013-01-01
The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process. Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the
Ahmed, Ahfaz
2015-03-01
Gasoline is the most widely used fuel for light-duty automobile transportation, but its molecular complexity makes its fundamental combustion properties intractable to study experimentally and computationally. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of the research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically the hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as to surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
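A stripped-down stand-in for the property-matching step: mole fractions of a three-species palette are optimized so that blended properties match a target. The property values are order-of-magnitude placeholders rather than measured FACE data, and linear blending of RON is a simplification that the real methodology avoids by calling REFPROP and CHEMKIN.

```python
import numpy as np
from scipy.optimize import minimize

species = ["n-heptane", "iso-octane", "toluene"]
props = np.array([        # columns: H/C ratio, density (kg/L), RON (placeholders)
    [2.29, 0.684, 0.0],   # n-heptane
    [2.25, 0.692, 100.0], # iso-octane
    [1.14, 0.867, 120.0], # toluene
])
target = np.array([1.95, 0.72, 84.0])   # hypothetical target-fuel properties
weights = np.array([1.0, 1.0, 0.01])    # scale the mismatch terms

def mismatch(x):
    return np.sum(weights * (x @ props - target) ** 2)

cons = {"type": "eq", "fun": lambda x: x.sum() - 1.0}   # fractions sum to 1
res = minimize(mismatch, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
print(dict(zip(species, np.round(res.x, 3))))
```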
Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights
Hale, Nicholas
2013-03-06
An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
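A much-simplified sketch of the idea: classical Chebyshev-like initial guesses refined by Newton's method on the three-term Legendre recurrence. This costs O(n²) per rule, whereas the paper's asymptotic function evaluations achieve O(n).

```python
import numpy as np

def gauss_legendre(n, iters=5):
    """Gauss-Legendre nodes/weights by Newton iteration (toy version)."""
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))   # classical initial guesses
    for _ in range(iters):
        p0, p1 = np.ones_like(x), x
        for j in range(2, n + 1):                # Legendre recurrence
            p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
        dp = n * (x * p1 - p0) / (x * x - 1.0)   # derivative P_n'
        x -= p1 / dp                             # Newton update
    w = 2.0 / ((1.0 - x * x) * dp * dp)          # quadrature weights
    return x, w

x, w = gauss_legendre(5)
print(np.sum(w * x**4), 2.0 / 5.0)   # integrates x^4 on [-1,1] exactly
```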
GRID COMPUTING AND CHECKPOINT APPROACH
Directory of Open Access Journals (Sweden)
Pankaj gupta
2011-05-01
Full Text Available Grid computing is a means of allocating the computational power of a large number of computers to complex, difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that even span organizational boundaries. In this paper we investigate the different techniques of fault tolerance which are used in many real-time distributed systems. The main focus is on the types of fault occurring in the system, fault detection techniques and the recovery techniques used. A fault that occurs due to link failure, resource failure or any other reason has to be tolerated for the system to work smoothly and accurately. These faults can be detected and recovered by many techniques used accordingly. An appropriate fault detector can avoid loss due to system crash, and a reliable fault tolerance technique can save the system from failure. This paper shows how these methods are applied to detect and tolerate faults in various real-time distributed systems. The advantages of utilizing the checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low-level checkpointing packages. Therefore, such a standard, named the Grid Checkpointing Architecture, is being designed. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for an automatic recovery is the availability of checkpoint files. A strategy to increase the availability of checkpoints is replication. A Grid is a form of distributed computing mainly intended to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of
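A minimal checkpoint/restart pattern of the kind discussed (the file name, interval, and stand-in workload are illustrative; on a Grid the checkpoint file would additionally be replicated to other resources):

```python
import os
import pickle

CKPT = "job.ckpt"   # illustrative checkpoint file name

def run_job(total_steps=1000):
    step, state = 0, 0.0
    if os.path.exists(CKPT):              # restart: resume from last checkpoint
        with open(CKPT, "rb") as f:
            step, state = pickle.load(f)
    while step < total_steps:
        state += step * 0.001             # stand-in for real work
        step += 1
        if step % 100 == 0:               # illustrative checkpoint interval
            with open(CKPT, "wb") as f:
                pickle.dump((step, state), f)
    return state
```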
Computational Approaches to Vestibular Research
Ross, Muriel D.; Wade, Charles E. (Technical Monitor)
1994-01-01
The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and
Computer Networks A Systems Approach
Peterson, Larry L
2011-01-01
This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big picture" understanding.
2014-10-08
models to compute accurately the molecular interactions between a mobile or stationary phase and a target substrate or analyte, which are fundamental to diverse technologies, e.g., sensor or separation design. ... 223rd Electrochemical Society Meeting, New Orleans, LA, April 9, 2013: Continuum Solvation Models for Computational Electrochemistry.
Efficient and accurate P-value computation for Position Weight Matrices
Directory of Open Access Journals (Sweden)
Varré Jean-Stéphane
2007-12-01
Full Text Available Abstract Background Position Weight Matrices (PWMs) are probabilistic representations of signals in sequences. They are widely used to model approximate patterns in DNA or in protein sequences. Using PWMs requires, as a prerequisite, knowing the statistical significance of a word according to its score. This is done by defining the P-value of a score, which is the probability that the background model can achieve a score larger than or equal to the observed value. This gives rise to the following problem: given a P-value, find the corresponding score threshold. Existing methods rely on dynamic programming or probability generating functions. For many examples of PWMs, they fail to give accurate results in a reasonable amount of time. Results The contribution of this paper is twofold. First, we study the theoretical complexity of the problem, and we prove that it is NP-hard. Then, we describe a novel algorithm that solves the P-value problem efficiently. The main idea is to use a series of discretized score distributions that improves the final result step by step until some convergence criterion is met. Moreover, the algorithm is capable of calculating the exact P-value without any error, even for matrices with non-integer coefficient values. The same approach is also used to devise an accurate algorithm for the reverse problem: finding the P-value for a given score. Both methods are implemented in software called TFM-PVALUE, which is freely available. Conclusion We have tested TFM-PVALUE on a large set of PWMs representing transcription factor binding sites. Experimental results show that it achieves better performance in terms of computational time and precision than existing tools.
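For orientation, the baseline dynamic program that such tools refine computes the exact distribution of integer-rounded scores under an i.i.d. background and then sums the tail; a toy version with invented scores:

```python
def pwm_score_pvalue(pwm, threshold, bg=None):
    """Exact P(score >= threshold) for a PWM with integer scores under an
    i.i.d. background model. pwm: list of dicts base -> integer score."""
    bg = bg or {b: 0.25 for b in "ACGT"}
    dist = {0: 1.0}                          # partial score -> probability
    for col in pwm:                          # convolve one column at a time
        new = {}
        for s, p in dist.items():
            for b, sc in col.items():
                new[s + sc] = new.get(s + sc, 0.0) + p * bg[b]
        dist = new
    return sum(p for s, p in dist.items() if s >= threshold)

# Toy 2-column matrix with integer scores (illustrative values only).
pwm = [{"A": 2, "C": -1, "G": -1, "T": 0},
       {"A": 0, "C": 1, "G": -2, "T": 1}]
print(pwm_score_pvalue(pwm, threshold=2))    # -> 0.1875
```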
Institute of Scientific and Technical Information of China (English)
F. Djeffal; A. Ferdi; M. Chahdi
2012-01-01
The double gate (DG) silicon MOSFET with an extremely short channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problem in the nanotechnology industry.
Computational approaches for drug discovery.
Hung, Che-Lun; Chen, Chi-Chun
2014-09-01
Cellular proteins are the mediators of multiple organism functions, being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy and selectivity and reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective, using experiments to compare predicted and actual drug activity; the results can then be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligands and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationships and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be used. After lead compounds are generated, the rule of five can be used to assess whether these have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and the goodness-of-hit test, can be used to estimate the merits of different drug design strategies. To further improve CADD performance, multi-computers and graphics processing units may be applied to reduce costs.
Fuzzy multiple linear regression: A computational approach
Juang, C. H.; Huang, X. H.; Fleming, J. W.
1992-01-01
This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, the treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, followed by examples illustrating the new procedure.
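A toy rendering of a 'computational' treatment of fuzzy data (not the paper's formulation): triangular fuzzy outputs are represented as center/spread pairs and each component is fitted by ordinary least squares; the data are invented.

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])  # with intercept
yc = np.array([2.1, 3.9, 6.2, 7.8])    # output centers (invented data)
ys = np.array([0.3, 0.4, 0.5, 0.7])    # output spreads (invented data)

beta_c, *_ = np.linalg.lstsq(X, yc, rcond=None)  # center coefficients
beta_s, *_ = np.linalg.lstsq(X, ys, rcond=None)  # spread coefficients
print(beta_c, beta_s)   # predicted fuzzy output: (X @ beta_c) +/- (X @ beta_s)
```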
Computational approach to Riemann surfaces
Klein, Christian
2011-01-01
This volume offers a well-structured overview of existing computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition, examples of solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...
Computer-based personality judgments are more accurate than those made by humans.
Youyou, Wu; Kosinski, Michal; Stillwell, David
2015-01-27
Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
Energy Technology Data Exchange (ETDEWEB)
Drmac, Z. [Univ. of Colorado, Boulder, CO (United States). Dept. of Computer Science
1997-07-01
In this paper the author considers how to compute the singular value decomposition (SVD) A = UΣV^T of A = [a_1, a_2] ∈ R^{m×2} accurately in floating point arithmetic. It is shown how to compute the Jacobi rotation V (the right singular vector matrix) and how to compute AV = UΣ even if the floating point representation of V is the identity matrix. In the case ‖a_1‖_2 ≫ ‖a_2‖_2, underflow can produce the identity matrix as the floating point value of V, even for a_1, a_2 that are far from being mutually orthogonal. This can cause loss of accuracy and failure of convergence of the floating point implementation of the Jacobi method for computing the SVD. The modified Jacobi method recommended in this paper can be implemented as a reliable and highly accurate procedure for computing the SVD of general real matrices whenever the exact singular values do not exceed the underflow or overflow limits.
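The textbook step that the paper makes robust looks as follows for an m×2 matrix: form the 2×2 Gram quantities, choose the Jacobi rotation that orthogonalizes the two columns, and read U and Σ off AV. This naive version is precisely the one that can fail through underflow as described.

```python
import numpy as np

def svd_2col(A):
    """One-sided Jacobi for A in R^{m x 2} (naive textbook version)."""
    a = A[:, 0] @ A[:, 0]                    # Gram quantities
    b = A[:, 0] @ A[:, 1]
    c = A[:, 1] @ A[:, 1]
    theta = 0.5 * np.arctan2(2 * b, a - c)   # rotation that zeroes b
    cs, sn = np.cos(theta), np.sin(theta)
    V = np.array([[cs, -sn], [sn, cs]])
    B = A @ V                                # columns now orthogonal
    sigma = np.linalg.norm(B, axis=0)        # singular values (unsorted)
    U = B / sigma
    return U, sigma, V

A = np.array([[1e8, 1.0], [0.0, 1e-8], [0.0, 0.0]])  # |a1| >> |a2|
U, s, V = svd_2col(A)
print(s, np.allclose((U * s) @ V.T, A))
```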
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate, giving better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
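Independent of how the system matrix is obtained, one ART sweep is the Kaczmarz projection below; the relaxation factor and the two-pixel toy system are illustrative.

```python
import numpy as np

def art_sweep(A, b, x, lam=0.5):
    """One ART (Kaczmarz) sweep: project x toward each ray equation in turn.
    A holds per-ray weights (intersection areas for AIM, lengths for LIM)."""
    for i in range(A.shape[0]):
        ai = A[i]
        nrm = ai @ ai
        if nrm > 0.0:
            x = x + lam * (b[i] - ai @ x) / nrm * ai
    return x

# Tiny consistent toy system standing in for projections of a 2-pixel image.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([3.0, 1.0])
x = np.zeros(2)
for _ in range(50):
    x = art_sweep(A, b, x)
print(x)   # converges toward [1, 2]
```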
Genetic crossovers are predicted accurately by the computed human recombination map.
Directory of Open Access Journals (Sweden)
Pavel P Khil
2010-01-01
Full Text Available Hotspots of meiotic recombination can change rapidly over time. This instability and the reported high level of inter-individual variation in meiotic recombination call into question the accuracy of the calculated hotspot map, which is based on the summation of past genetic crossovers. To estimate the accuracy of the computed recombination rate map, we have mapped genetic crossovers to a median resolution of 70 Kb in 10 CEPH pedigrees. We then compared the positions of crossovers with the hotspots computed from HapMap data and performed extensive computer simulations to compare the observed distributions of crossovers with the distributions expected from the calculated recombination rate maps. Here we show that a population-averaged hotspot map computed from linkage disequilibrium data predicts present-day genetic crossovers well. We find that computed hotspot maps accurately estimate both the strength and the position of meiotic hotspots. An in-depth examination of crossovers that were not predicted shows that they are preferentially located in regions where hotspots are found in other populations. In summary, we find that by combining several computed population-specific maps we can capture the variation in individual hotspots to generate a hotspot map that can predict almost all present-day genetic crossovers.
Computer Algebra, Instrumentation and the Anthropological Approach
Monaghan, John
2007-01-01
This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…
Computational approaches for urban environments
Helbich, M; Jokar Arsanjani, J; Leitner, M
2015-01-01
This book aims to promote the synergistic usage of advanced computational methodologies in close relationship to geospatial information across cities of different scales. A rich collection of chapters subsumes current research frontiers originating from disciplines such as geography, urban planning,
A particle-tracking approach for accurate material derivative measurements with tomographic PIV
Novara, Matteo; Scarano, Fulvio
2013-08-01
The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration
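The final step described above, polynomial regression of the reconstructed track followed by differentiation, can be sketched as follows in Python. The 7-sample track and second-order fit are illustrative choices, not the paper's tuned settings:

```python
import numpy as np

def material_derivative(track, dt, order=2):
    """Fit a polynomial to a particle track (n_steps x 3 positions sampled
    every dt seconds) and return velocity and acceleration at the central
    sample. A sketch of the polynomial-regression idea only; the actual
    Tomo-3D-PTV pipeline (reconstruction, tracking) is not reproduced."""
    n = track.shape[0]
    t = (np.arange(n) - (n - 1) / 2.0) * dt   # centred time axis
    vel, acc = np.empty(3), np.empty(3)
    for k in range(3):
        p = np.poly1d(np.polyfit(t, track[:, k], order))
        vel[k] = p.deriv(1)(0.0)              # velocity at central time
        acc[k] = p.deriv(2)(0.0)              # acceleration at central time
    return vel, acc

# synthetic track with constant acceleration (0, 0, -9.8)
dt = 1e-3
t = np.arange(7) * dt
track = np.stack([1.0 * t, 2.0 * t, 3.0 * t - 4.9 * t**2], axis=1)
vel, acc = material_derivative(track, dt)
print(vel, acc)   # acceleration recovers ~[0, 0, -9.8]
```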
What is computation : An epistemic approach
Wiedermann, Jiří; van Leeuwen, Jan
2015-01-01
Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too bro
An Accurate Approach to Large-Scale IP Traffic Matrix Estimation
Jiang, Dingde; Hu, Guangmin
This letter proposes a novel method for large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on partial flow measurements and the Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of the TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TMs and track their dynamics.
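For context, the Fratar (gravity) model mentioned above amounts to a rank-one prior on the traffic matrix. A small illustrative sketch with hypothetical ingress/egress volumes; the ARTI refinement itself is not reproduced:

```python
import numpy as np

def gravity_model_tm(row_totals, col_totals):
    """Gravity-model (rank-one) traffic matrix prior: x_ij proportional to
    (traffic entering at i) * (traffic leaving at j) / total traffic.
    A common starting point for TM estimation, shown here only as a
    plausible prior for an ART-style refinement, not the exact ARTI model."""
    total = row_totals.sum()
    return np.outer(row_totals, col_totals) / total

inflow = np.array([30.0, 50.0, 20.0])    # hypothetical per-node ingress volumes
outflow = np.array([40.0, 40.0, 20.0])   # hypothetical per-node egress volumes
print(gravity_model_tm(inflow, outflow))
```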
Directory of Open Access Journals (Sweden)
Bryant Jamie
2011-11-01
Full Text Available Abstract Background Self report of smoking status is potentially unreliable in certain situations and in high-risk populations. This study aimed to determine the accuracy and acceptability of computer administered self-report of smoking status among a low socioeconomic status (SES) population. Methods Clients attending a community service organisation for welfare support were invited to complete a cross-sectional touch screen computer health survey. Following survey completion, participants were invited to provide a breath sample to measure exposure to tobacco smoke in expired air. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Results Three hundred and eighty three participants completed the health survey, and 330 (86%) provided a breath sample. Of participants included in the validation analysis, 59% reported being a daily or occasional smoker. Sensitivity was 94.4% and specificity 92.8%. The positive and negative predictive values were 94.9% and 92.0% respectively. The majority of participants reported that the touch screen survey was both enjoyable (79%) and easy (88%) to complete. Conclusions Computer administered self report is both acceptable and accurate as a method of assessing smoking status among low SES smokers in a community setting. Routine collection of health information using touch-screen computers has the potential to identify smokers and increase provision of support and referral in the community setting.
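The four reported validation metrics derive from a standard 2x2 table. A minimal sketch with placeholder counts, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 validation metrics of the kind reported above.
    The counts passed below are hypothetical placeholders."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# example: self-report vs. a breath-sample reference
print(diagnostic_metrics(tp=170, fp=9, fn=10, tn=116))
```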
Antenna arrays a computational approach
Haupt, Randy L
2010-01-01
This book covers a wide range of antenna array topics that are becoming increasingly important in wireless applications, particularly in design and computer modeling. Signal processing and numerical modeling algorithms are explored, and MATLAB computer codes are provided for many of the design examples. Pictures of antenna arrays and components provided by industry and government sources are presented with explanations of how they work. Antenna Arrays is a valuable reference for practicing engineers and scientists in wireless communications, radar, and remote sensing, and an excellent textbook for advanced antenna courses.
Immune based computer virus detection approaches
Institute of Scientific and Technical Information of China (English)
TAN Ying; ZHANG Pengtao
2013-01-01
The computer virus is considered one of the most horrifying threats to the security of computer systems worldwide. The rapid development of evasion techniques used in viruses causes signature based computer virus detection techniques to be ineffective. Many novel computer virus detection approaches have been proposed in the past to cope with this ineffectiveness, mainly classified into three categories: static, dynamic and heuristic techniques. Given the natural similarities between the biological immune system (BIS) and computer security systems (CSS), the artificial immune system (AIS) was developed as a new prototype in the community of anti-virus research. The immune mechanisms in the BIS provide the opportunity to construct computer virus detection models that are robust and adaptive, with the ability to detect unseen viruses. In this paper, a variety of classic computer virus detection approaches are introduced and reviewed against the background of computer virus history. Next, a variety of immune based computer virus detection approaches are discussed in detail. Promising experimental results suggest that immune based computer virus detection approaches are able to detect new variants and unseen viruses at lower false positive rates, which has paved a new way for anti-virus research.
Computational approaches for systems metabolomics.
Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J
2016-06-01
Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.
Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation
Exl, Lukas; Mauser, Norbert J.; Zhang, Yong
2016-12-01
We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast decaying densities, we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The first is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O(N log N) complexity), low in storage, easily adaptable to other kernels, applicable to anisotropic densities and highly parallelizable.
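The kernel trick can be illustrated in a few lines: a Gaussian-sum quadrature for the 3D Coulomb kernel 1/r, accurate away from the origin (the near-field correction is handled separately in the paper). The step size and number of terms below are illustrative, not the paper's tuned values:

```python
import numpy as np

def gaussian_sum_coulomb(h=0.35, n=60):
    """Gaussian-sum approximation of 1/r built from the identity
    1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt, discretized by the
    trapezoidal rule after the substitution t = exp(u). Returns weights
    w_k and widths tau_k so that 1/r ~ sum_k w_k exp(-r^2 tau_k^2)."""
    u = np.arange(-n, n + 1) * h
    tau = np.exp(u)
    w = 2.0 / np.sqrt(np.pi) * h * tau
    return w, tau

w, tau = gaussian_sum_coulomb()
r = np.logspace(-2, 1, 7)                     # test radii in [0.01, 10]
approx = (w[None, :] * np.exp(-np.outer(r**2, tau**2))).sum(axis=1)
print(np.max(np.abs(approx - 1.0 / r) * r))   # max relative error, ~1e-8 here
```

The separability in x, y, z of each Gaussian term is what makes the subsequent FFT-based evaluation fast.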
Learning and geometry computational approaches
Smith, Carl
1996-01-01
The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...
Cloud computing methods and practical approaches
Mahmood, Zaigham
2013-01-01
This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number
Directory of Open Access Journals (Sweden)
Wang Xingyuan
2010-01-01
Full Text Available This paper presents two methods for accurately computing the centers of periodic regions. One method applies to the general M-sets with integer index number, the other to the general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations that determine the centers of the periodic regions. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. By applying Newton's method to the transformed polynomial equation that determines the centers, we obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k=3,4,5,6) for the index numbers α=−25,−24,…,−1, all of which have high numerical accuracy.
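For the classic quadratic M-set (index 2), the Newton step on the defining polynomial looks as follows. A sketch with mpmath at an illustrative 60-digit working precision; the paper's general and negative-index cases are not reproduced:

```python
from mpmath import mp, mpc

mp.dps = 60  # working precision in digits (illustrative choice)

def newton_center(c0, k, iters=80):
    """Newton iteration for a period-k region center of the classic
    Mandelbrot set (z -> z^2 + c): solve P_k(c) = 0 where P_1 = c and
    P_{n+1} = P_n^2 + c, with derivative P'_{n+1} = 2 P_n P'_n + 1."""
    c = mpc(c0)
    for _ in range(iters):
        p, dp = c, mpc(1)
        for _ in range(k - 1):
            p, dp = p * p + c, 2 * p * dp + 1
        c -= p / dp
    return c

# the real period-3 center, a root of c^3 + 2c^2 + c + 1 = 0
print(newton_center(-1.75, 3))   # ~ -1.7548776662...
```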
Sheng, Qiwei; Matthews, Thomas P; Xia, Jun; Zhu, Liren; Wang, Lihong V; Anastasio, Mark A
2015-01-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the absorbed optical energy density within tissue. When the imaging system employs conventional piezoelectric ultrasonic transducers, the ideal photoacoustic (PA) signals are degraded by the transducers' acousto-electric impulse responses (EIRs) during the measurement process. If unaccounted for, this can degrade the accuracy of the reconstructed image. In principle, the effect of the EIRs on the measured PA signals can be ameliorated via deconvolution; images can be reconstructed subsequently by application of a reconstruction method that assumes an idealized EIR. Alternatively, the effect of the EIR can be incorporated into an imaging model and implicitly compensated for during reconstruction. In either case, the efficacy of the correction can be limited by errors in the assumed EIRs. In this work, a joint optimization approach to PACT image r...
When do perturbative approaches accurately capture the dynamics of complex quantum systems?
Fruchtman, Amir; Lambert, Neill; Gauger, Erik M.
2016-06-01
Understanding the dynamics of higher-dimensional quantum systems embedded in a complex environment remains a significant theoretical challenge. While several approaches yielding numerically converged solutions exist, these are computationally expensive and often provide only limited physical insight. Here we address the question: when do more intuitive and simpler-to-compute second-order perturbative approaches provide adequate accuracy? We develop a simple analytical criterion and verify its validity for the case of the much-studied FMO dynamics as well as the canonical spin-boson model.
An Efficient Approach for Computing Silhouette Coefficients
Directory of Open Access Journals (Sweden)
Moh'd B. Al- Zoubi
2008-01-01
Full Text Available One popular approach for finding the best number of clusters (K) in a data set is through computing the silhouette coefficients. The silhouette coefficients for different values of K are first found, and then the maximum value of these coefficients is chosen. However, computing the silhouette coefficient for different values of K is a very time consuming process, due to the amount of CPU time spent on distance calculations. An approach to computing the silhouette coefficient quickly is proposed, based on decreasing the number of addition operations when computing distances. The approach is efficient: savings of more than 50% of the CPU time were achieved when it was applied to different data sets.
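For reference, the baseline silhouette computation that the speed-up targets can be sketched as below (a plain O(n^2) version, assuming every cluster has at least two points; the proposed distance-reuse optimization is not shown):

```python
import numpy as np

def silhouette_coefficient(X, labels):
    """For each point: a = mean intra-cluster distance, b = smallest mean
    distance to another cluster, s = (b - a) / max(a, b). Returns the mean
    silhouette over all points. Assumes clusters of size >= 2."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.zeros(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()
        b = min(D[i, labels == L].mean() for L in set(labels) - {labels[i]})
        s[i] = (b - a) / max(a, b)
    return s.mean()

X = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6]])
labels = np.array([0, 0, 1, 1])
print(silhouette_coefficient(X, labels))  # close to 1: well-separated clusters
```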
DEFF Research Database (Denmark)
Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager
2007-01-01
This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...
Toward exascale computing through neuromorphic approaches.
Energy Technology Data Exchange (ETDEWEB)
James, Conrad D.
2010-09-01
While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.
Accurate computation of surface stresses and forces with immersed boundary methods
Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim
2016-09-01
Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.
Accurate computation and interpretation of spin-dependent properties in metalloproteins
Rodriguez, Jorge
2006-03-01
Nature uses the properties of open-shell transition metal ions to carry out a variety of functions associated with vital life processes. Mononuclear and binuclear iron centers, in particular, are intriguing structural motifs present in many heme and non-heme proteins. Hemerythrin and methane monooxygenase, for example, are members of the latter class whose diiron active sites display magnetic ordering. We have developed a computational protocol based on spin density functional theory (SDFT) to accurately predict physico-chemical parameters of metal sites in proteins and bioinorganic complexes which traditionally had only been determined from experiment. We have used this new methodology to perform a comprehensive study of the electronic structure and magnetic properties of heme and non-heme iron proteins and related model compounds. We have been able to predict with a high degree of accuracy spectroscopic (Mössbauer, EPR, UV-vis, Raman) and magnetization parameters of iron proteins and, at the same time, gained unprecedented microscopic understanding of their physico-chemical properties. Our results have allowed us to establish important correlations between the electronic structure, geometry, spectroscopic data, and biochemical function of heme and non-heme iron proteins.
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Energy Technology Data Exchange (ETDEWEB)
Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
Towards Lagrangian approach to quantum computations
Vlasov, A Yu
2003-01-01
This work discusses the possibility and actuality of a Lagrangian approach to quantum computations. The finite-dimensional Hilbert spaces used in this area pose some challenges for such a treatment. The model discussed here can be considered an analogue of the Weyl quantization of field theory via the path integral in L. D. Faddeev's approach. Weyl quantization can also be used in the finite-dimensional case, and some formulas may simply be rewritten with integrals replaced by finite sums. On the other hand, there are specific difficulties relevant to the finite case. This work has some parallels with the phase space models of quantum computations developed recently by different authors.
Computing Isolated Singular Solutions of Polynomial Systems Accurately: Case of Breadth One
Li, Nan
2010-01-01
We present a symbolic-numeric method to refine an approximate isolated singular solution $\hat{\mathbf{x}}=(\hat{x}_{1}, \ldots, \hat{x}_{n})$ of a polynomial system $F=\{f_1, \ldots, f_n\}$ when the Jacobian matrix of $F$ evaluated at $\hat{\mathbf{x}}$ has corank one approximately. Our new approach is based on the regularized Newton iteration and the computation of approximate Max Noether conditions satisfied at the singular solution. The size of the matrices involved in our algorithm is bounded by $n \times n$, and the algorithm converges quadratically if $\hat{\mathbf{x}}$ is near the exact singular solution. The method has been implemented in Maple and can deal with both regular singularities and irregular singularities.
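A generic Tikhonov-regularized Newton step in the spirit of this approach, sketched on a toy breadth-one singular system (the Max Noether condition machinery is not reproduced):

```python
import numpy as np

def regularized_newton_step(F, J, x, mu):
    """One regularized Newton step for F(x) = 0 with a (nearly)
    rank-deficient Jacobian: solve (J^T J + mu*I) dx = -J^T F instead of
    J dx = -F. Generic Levenberg-Marquardt-style regularization, not the
    paper's exact scheme."""
    Jx, Fx = J(x), F(x)
    dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -Jx.T @ Fx)
    return x + dx

# breadth-one singular root x = (0, 0) of F = (x1^2 - x2, x2)
F = lambda x: np.array([x[0]**2 - x[1], x[1]])
J = lambda x: np.array([[2 * x[0], -1.0], [0.0, 1.0]])
x = np.array([0.3, 0.2])
for _ in range(30):
    x = regularized_newton_step(F, J, x, mu=np.linalg.norm(F(x)))
print(x)  # approaches the singular solution (0, 0)
```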
A novel fast and accurate pseudo-analytical simulation approach for MOAO
Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-08-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to- end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated to this covariance matrix of the residuals, like in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and using an optimized linear algebra library: MORSE providing a significant speedup against standard CPU oriented libraries such as Intel MKL. Because the covariance matrix is
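At the heart of the scheme is the MMSE reconstructor, which for jointly Gaussian phase x and measurements y reads R = C_xy (C_yy + C_noise)^{-1}. A toy-sized NumPy sketch; real MOAO systems invert matrices of order ~40 000, hence the GPU-accelerated library mentioned above:

```python
import numpy as np

def mmse_reconstructor(C_xy, C_yy, noise_cov):
    """MMSE tomographic reconstructor R = C_xy (C_yy + C_noise)^{-1},
    mapping WFS measurements y to the phase estimate x. Computed via a
    linear solve rather than an explicit inverse."""
    M = C_yy + noise_cov
    return np.linalg.solve(M.T, C_xy.T).T   # = C_xy @ inv(M)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # hypothetical phase -> measurement map
C_xx = np.eye(4)                  # toy prior phase covariance
C_yy = A @ C_xx @ A.T             # measurement covariance
C_xy = C_xx @ A.T                 # phase/measurement cross-covariance
R = mmse_reconstructor(C_xy, C_yy, 0.01 * np.eye(6))

x_true = rng.standard_normal(4)
print(np.round(x_true, 2), np.round(R @ (A @ x_true), 2))  # mild shrinkage only
```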
Stable, accurate and efficient computation of normal modes for horizontal stratified models
Wu, Bo; Chen, Xiaofei
2016-08-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, can then easily be avoided. Built upon the generalized reflection/transmission coefficients, the concept of a 'family of secular functions', which we here call 'adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea is noted distinctly for the first time and may be generalized to other applications, such as free oscillations, or applied to other methods when these cases are encountered. Additionally, we have made further improvements to the generalized reflection/transmission coefficient method: mode observers associated with only the free surface and the low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes, without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, as entailed by the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers, aided by the concept of a 'turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
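The generic skeleton of such a mode search, scanning a secular function for sign changes and refining each bracket, can be sketched as follows (assuming SciPy; the paper's adaptive mode observers, which reshape the secular function itself so that no sign change is lost, are not reproduced):

```python
import numpy as np
from scipy.optimize import brentq

def find_modes(secular, c_min, c_max, n_scan=2000):
    """Locate roots of a secular (dispersion) function F(c) at fixed
    frequency by scanning for sign changes on a grid and refining each
    bracket with Brent's method. Plain root search only."""
    cs = np.linspace(c_min, c_max, n_scan)
    vals = np.array([secular(c) for c in cs])
    roots = []
    for i in range(len(cs) - 1):
        if np.sign(vals[i]) * np.sign(vals[i + 1]) < 0:
            roots.append(brentq(secular, cs[i], cs[i + 1]))
    return roots

# toy secular function with four known zeros in [0, 4]
print(find_modes(lambda c: np.cos(3.0 * c) - 0.2, 0.0, 4.0))
```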
Energy Technology Data Exchange (ETDEWEB)
Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)
2014-10-15
Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-)myocardial fibrosis in hypertrophic cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT-derived LV muscle mass (LV-MM), which may be related to LE and yield an LE proportion for potential risk stratification in HCM. N = 26 HCM patients underwent leMDCT (64-slice CT) and cardiovascular magnetic resonance (CMR). In leMDCT, iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV contrast, assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 g in CMR (p > 0.05). In the group with sufficient contrast, LV-MM was 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of the semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT measures correlated well with CMR (r > 0.9). LeMDCT, primarily performed for LE visualization in HCM, allows for accurate LV volumetry including LV-MM in > 90% of cases. (orig.)
Accurate potential energy surfaces with a DFT+U(R) approach.
Kulik, Heather J; Marzari, Nicola
2011-11-21
We introduce an improvement to the Hubbard U augmented density functional approach known as DFT+U that incorporates variations in the value of self-consistently calculated, linear-response U with changes in geometry. This approach overcomes the one major shortcoming of previous DFT+U studies, i.e., the use of an averaged Hubbard U when comparing energies for different points along a potential energy surface is no longer required. While DFT+U is quite successful at providing accurate descriptions of localized electrons (e.g., d or f) by correcting self-interaction errors of standard exchange correlation functionals, we show several diatomic molecule examples where this position-dependent DFT+U(R) provides a significant two- to four-fold improvement over DFT+U predictions, when compared to accurate correlated quantum chemistry and experimental references. DFT+U(R) reduces errors in binding energies, frequencies, and equilibrium bond lengths by applying the linear-response, position-dependent U(R) at each configuration considered. This extension is most relevant where variations in U are large across the points being compared, as is the case with covalent diatomic molecules such as transition-metal oxides. We thus provide a tool for deciding whether a standard DFT+U approach is sufficient by determining the strength of the dependence of U on changes in coordinates. We also apply this approach to larger systems with greater degrees of freedom and demonstrate how DFT+U(R) may be applied automatically in relaxations, transition-state finding methods, and dynamics.
Computer networking a top-down approach
Kurose, James
2017-01-01
Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.
Unilateral hyperlucency of the lung: a systematic approach to accurate radiographic interpretation
Energy Technology Data Exchange (ETDEWEB)
Noh, Hyung Jun; Oh, Yu Whan; Choi, Eun Jung; Seo, Bo Kyung; Cho, Kyu Ran; Kang, Eun Young; Kim, Jung Hyuk [Korea University College of Medicine, Seoul (Korea, Republic of)
2002-12-01
The radiographic appearance of a unilateral hyperlucent lung is related to various conditions, the accurate radiographic interpretation of which requires a structured approach as well as an awareness of the spectrum of these entities. Firstly, it is important to determine whether a hyperlucent hemithorax is associated with artifacts resulting from rotation of the patient, grid cutoff, or the heel effect. The second step is to determine whether or not a hyperlucent lung is abnormal. Lung that is in fact normal may appear hyperlucent because of diffusely increased opacity of the opposite hemithorax. Thirdly, thoracic wall and soft tissue abnormalities such as mastectomy or Poland syndrome may cause apparent hyperlucency. Lastly, abnormalities of the lung parenchyma may result in hyperlucency. Lung abnormalities can be divided into two groups: a) obstructive or compensatory hyperinflation; and b) reduced vascular perfusion of the lung due to congenital or acquired vascular abnormalities. In this article, we describe and illustrate the imaging spectrum of these causes and outline a structured approach to accurate radiographic interpretation.
Hybrid soft computing approaches research and applications
Dutta, Paramartha; Chakraborty, Susanta
2016-01-01
The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis, (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.
A Highly Accurate and Efficient Analytical Approach to Bridge Deck Free Vibration Analysis
Directory of Open Access Journals (Sweden)
D.J. Gorman
2000-01-01
Full Text Available The superposition method is employed to obtain an accurate analytical-type solution for the free vibration frequencies and mode shapes of multi-span bridge decks. Free edge conditions are imposed on the long edges running in the direction of the deck. Inter-span support is of the simple (knife-edge) type. The analysis is valid regardless of the number of spans or their individual lengths. Exact agreement is found when computed results are compared with known eigenvalues for bridge decks with all spans of equal length. Mode shapes and eigenvalues are presented for typical bridge decks of three and four span lengths. In each case torsional and non-torsional modes are studied.
Directory of Open Access Journals (Sweden)
Saumya Tiwari
Full Text Available Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.
Annecchione, Maria; Hatch, David; Hefford, Shane W.
2017-01-01
In this paper we investigate digital elevation model (DEM) sourcing requirements to compute gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the bare-earth DEM accuracy and spatial resolution. The bare-earth DEM accuracy and spatial resolution depends on its source. Two possible sources are considered: airborne LiDAR and Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height. The SRTM footprint is also larger and the DEM resolution is thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that to achieve 1 Eö accuracy in the terrain correction at 80 m height airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy are sufficient for
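To fix orders of magnitude, the Gzz terrain effect of a DEM can be crudely estimated by summing point-mass contributions over the grid. A back-of-the-envelope sketch with toy values; production terrain corrections use prisms and careful near-zone handling, neither of which is shown here:

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
EOTVOS = 1e9           # 1 s^-2 = 1e9 Eotvos

def gzz_terrain_effect(dem, dx, rho, x0, y0, h):
    """Crude Gzz terrain effect at height h, treating each DEM cell as a
    point mass m = rho * dx^2 * elevation placed at the column mid-height:
    Gzz = G*m*(3*dz^2 - r^2)/r^5. Illustrative only; sign conventions and
    near-cell treatment vary between gradiometry processing packages."""
    ny, nx = dem.shape
    y, x = np.mgrid[0:ny, 0:nx] * dx
    dz = h - dem / 2.0                    # observer-to-cell vertical offset
    r2 = (x - x0) ** 2 + (y - y0) ** 2 + dz ** 2
    m = rho * dx * dx * dem               # cell mass
    return EOTVOS * np.sum(G * m * (3 * dz ** 2 - r2) / r2 ** 2.5)

dem = 30.0 * np.ones((200, 200))          # toy flat 30 m terrain on a 10 km grid
print(gzz_terrain_effect(dem, dx=50.0, rho=2670.0,
                         x0=5000.0, y0=5000.0, h=80.0))  # Gzz in Eotvos
```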
Handbook of computational approaches to counterterrorism
Subrahmanian, VS
2012-01-01
Terrorist groups throughout the world have been studied primarily through the use of social science methods. However, major advances in IT during the past decade have led to significant new ways of studying terrorist groups, making forecasts, learning models of their behaviour, and shaping policies about their behaviour. Handbook of Computational Approaches to Counterterrorism provides the first in-depth look at how advanced mathematics and modern computing technology is shaping the study of terrorist groups. This book includes contributions from world experts in the field, and presents extens
Boelens, O.J.; Laban, M.; Beek, van C.M.; Leeden, van der R.
2001-01-01
In this report the contribution of the National Aerospace Laboratory NLR to the 'CFD Drag Prediction Workshop' organized by the AIAA in Anaheim, CA, on June 9-10, 2001, is presented. This contribution consists of both the results of all test cases and a discussion on the accurate computation of dr
Efficient and Accurate Computational Framework for Injector Design and Analysis Project
National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...
Ustinov, E A
2014-10-01
The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of the coexisting phases, and a method for predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.
Ambikasaran, Sivaram
2015-01-01
Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities, for low density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criteria, which scales as $\mathcal{O}(N^3)$ when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocal of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of matrix $V$ is computed in a fast manner at a computational complexity of $\...
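The dense O(N^3) baseline that the abstract refers to is a single linear solve. A sketch with illustrative values; the paper's O(N) low-rank algorithm is not reproduced:

```python
import numpy as np

def stefan_maxwell_velocities(x, w, D, d):
    """Solve the Stefan-Maxwell system for diffusion velocities V:
        d_i = sum_{j != i} x_i x_j (V_j - V_i) / D_ij,
    closed with zero net mass flux, sum_i w_i V_i = 0 (the rows are
    linearly dependent, so one equation is replaced by the closure).
    x: mole fractions, w: mass fractions, D: symmetric binary
    diffusivities, d: driving forces (which sum to zero)."""
    A = np.outer(x, x) / D
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))   # row i: -sum_j x_i x_j / D_ij
    A[-1, :] = w                          # replace singular row by closure
    rhs = d.copy()
    rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)        # dense O(N^3) solve

x = np.array([0.2, 0.3, 0.5])
w = np.array([0.1, 0.3, 0.6])
D = np.array([[1.0, 0.8, 0.5], [0.8, 1.0, 0.6], [0.5, 0.6, 1.0]])
d = np.array([0.01, -0.004, -0.006])
print(stefan_maxwell_velocities(x, w, D, d))
```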
Novel computational approaches characterizing knee physiotherapy
Wangdo Kim; Veloso, Antonio P; Duarte Araujo; Kohles, Sean S.
2014-01-01
A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physi...
Advanced computational approaches to biomedical engineering
Saha, Punam K; Basu, Subhadip
2014-01-01
There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig
Computational Approaches to Nucleic Acid Origami.
Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo
2015-10-12
Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind that of DNA origami by a large gap. Furthermore, although computational methods have proven effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.
A novel PCR-based approach for accurate identification of Vibrio parahaemolyticus
Directory of Open Access Journals (Sweden)
Ruichao eLi
2016-01-01
Full Text Available A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the blaCARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Phylogenetic analysis showed that blaCARB-17 like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh, toxR and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification.
A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.
Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng
2016-01-01
A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the bla CARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that bla CARB-17 like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this bla CARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of bla CARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by bla CARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification.
Esque, Jeremy; Cecchini, Marco
2015-04-23
The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.
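Each perturbation stage in such a multistage schedule reduces to the Zwanzig free-energy perturbation estimator. A sketch with synthetic Gaussian energy gaps, illustrative only:

```python
import numpy as np

def fep_delta_f(du, kT):
    """Zwanzig estimator: dF = -kT * ln < exp(-(U1 - U0)/kT) >_0,
    evaluated with a log-sum-exp shift for numerical stability. du holds
    the energy differences U1 - U0 sampled in state 0; one such estimate
    per stage is accumulated over the multistage schedule."""
    a = -du / kT
    amax = a.max()
    return -kT * (amax + np.log(np.mean(np.exp(a - amax))))

rng = np.random.default_rng(1)
kT = 0.596                           # kcal/mol at ~300 K
du = rng.normal(1.0, 0.5, 10000)     # hypothetical sampled energy gaps
print(fep_delta_f(du, kT))
# for Gaussian du the second-order result is mean - var/(2 kT) ~ 0.79
```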
Sun, Y Y; Kim, Yong-Hyun; Lee, Kyuho; Zhang, S B
2008-10-21
Density functional theory (DFT) in the commonly used local density or generalized gradient approximation fails to describe van der Waals (vdW) interactions that are vital to organic, biological, and other molecular systems. Here, we propose a simple, efficient, yet accurate local atomic potential (LAP) approach, named DFT+LAP, for including vdW interactions in the framework of DFT. The LAPs for H, C, N, and O are generated by fitting the DFT+LAP potential energy curves of small molecule dimers to those obtained from coupled cluster calculations with single, double, and perturbatively treated triple excitations, CCSD(T). Excellent transferability of the LAPs is demonstrated by remarkable agreement with the JSCH-2005 benchmark database [P. Jurecka et al. Phys. Chem. Chem. Phys. 8, 1985 (2006)], which provides the interaction energies of CCSD(T) quality for 165 vdW and hydrogen-bonded complexes. For over 100 vdW dominant complexes in this database, our DFT+LAP calculations give a mean absolute deviation from the benchmark results less than 0.5 kcal/mol. The DFT+LAP approach involves no extra computational cost other than standard DFT calculations and no modification of existing DFT codes, which enables straightforward quantum simulations, such as ab initio molecular dynamics, on biomolecular systems, as well as on other organic systems.
Computer Forensics Education - the Open Source Approach
Huebner, Ewa; Bem, Derek; Cheung, Hon
In this chapter we discuss the application of open source software tools in computer forensics education at the tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process, as opposed to familiarity with one software product, however complex and multi-functional. With access to all source programs, the students become more than just consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and the relevant data structures, and in the process gain the necessary background to become future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that, without exception, more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic fieldwork, as they gain confidence to use a variety of tools, not just a single product they are familiar with.
Interacting electrons theory and computational approaches
Martin, Richard M; Ceperley, David M
2016-01-01
Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.
Computational approaches to analogical reasoning current trends
Richard, Gilles
2014-01-01
Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it was studied early in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...
A computational language approach to modeling prose recall in schizophrenia.
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita
2014-06-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
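A stripped-down illustration of the two classes of language features, assuming toy sentences in place of the study's prose passages: an n-gram measure captures the sequential aspect of a recall, and a vector-space cosine stands in for the semantic (LSA) score.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

story = "the old farmer sold his land and moved to the city"
recall = "an old farmer moved to the city after selling his land"

def ngram_overlap(a, b, n=2):
    """Fraction of the story's word n-grams reproduced in the recall."""
    grams = lambda s: {tuple(s.split()[i:i+n])
                       for i in range(len(s.split()) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / max(len(ga), 1)

# Sequential feature: how much of the original word order survives.
print("bigram overlap:", round(ngram_overlap(story, recall), 3))

# Semantic feature: cosine similarity of TF-IDF vectors. A full LSA
# pipeline would first project the vectors onto ~100-300 latent
# dimensions (e.g. sklearn's TruncatedSVD) fitted on a large corpus.
X = TfidfVectorizer().fit_transform([story, recall])
print("semantic similarity:", round(cosine_similarity(X[0], X[1])[0, 0], 3))
```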
Huré, J.-M.; Hersant, F.
2017-02-01
We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ˜1/N2. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.
Ng, C. N.; Chu, T. P.; Wu, Huasheng; Tong, S. Y.; Huang, Hong
1997-03-01
We compare multiple scattering results of angle-resolved photoelectron diffraction spectra between the exact slab method and the separable propagator perturbation method. In the slab method [C.H. Li, A.R. Lubinsky and S.Y. Tong, Phys. Rev. B 17, 3128 (1978)], the source wave and multiple scattering within the strong-scattering atomic layers are expanded in spherical waves, while interlayer scattering is expressed in plane waves. The transformation between spherical waves and plane waves is done exactly. The plane waves are then matched across the solid-vacuum interface to a single outgoing plane wave in the detector's direction. The separable propagator perturbation approach uses two approximations: (i) a separable representation of the Green's function propagator and (ii) a perturbation expansion of multiple scattering terms. Results for c(2x2) S-Ni(001) show that this approximate method fails to converge, due to the very slow convergence of the separable representation for scattering angles less than 90°. However, this method is accurate in the backscattering regime and may be applied to XAFS calculations [J.J. Rehr and R.C. Albers, Phys. Rev. B 41, 8139 (1990)]. The use of this method for angle-resolved photoelectron diffraction spectra is substantially less reliable.
Institute of Scientific and Technical Information of China (English)
Chaoying Bai; Rui Zhao; Stewart Greenhalgh
2009-01-01
A novel hybrid approach for earthquake location is proposed which uses a combined coarse global search and fine local inversion with a minimum search routine, plus an examination of the root mean square (RMS) error distribution. The method exploits the advantages of network ray tracing and a robust formulation of the Fréchet derivatives to simultaneously update all possible initial source parameters around most local minima (including the global minimum) in the solution space, and finally to determine the likely global solution. Several synthetic examples involving a 3-D complex velocity model and a challenging source-receiver layout are used to demonstrate the capability of the newly developed method. This new global-local hybrid solution technique not only incorporates the significant benefits of our recently published hypocenter determination procedure for multiple earthquake parameters, but also offers the attractive features of globally optimal searching in the RMS travel-time error distribution. Unlike traditional global search methods, for example the Monte Carlo approach, where millions of trials have to be performed to find the final global solution, the new method conducts only a matrix-inversion-type local search, but does so multiple times simultaneously throughout the model volume to seek a global solution. The search is aided by inspection of the RMS error distribution. Benchmark tests against two popular approaches, the direct grid search method and the oct-tree importance sampling method, indicate that the hybrid global-local inversion yields comparable location accuracy and is not sensitive to modest levels of noise in the data; more importantly, it offers a two-orders-of-magnitude speed-up in computational effort. Such an improvement, combined with high accuracy, makes it a promising hypocenter determination scheme for earthquake early warning, tsunami early warning, rapid hazard assessment and emergency response after a strong earthquake.
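The coarse-global-search-plus-fine-local-inversion idea can be illustrated in a few lines, under assumptions much stronger than the paper's: straight rays in a constant-velocity medium instead of network ray tracing in a 3-D model, a known origin time, and a generic derivative-free local minimizer in place of the Fréchet-derivative inversion.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
v = 5.8                                   # assumed constant P velocity, km/s
stations = rng.uniform(-50, 50, (8, 2))   # 8 surface stations, x-y in km
true_src = np.array([12.0, -7.0, 9.0])    # x, y, depth (km)

def travel_times(src):
    # Straight-ray travel times; origin time taken as zero for brevity
    d = np.sqrt(((stations - src[:2])**2).sum(1) + src[2]**2)
    return d / v

obs = travel_times(true_src) + rng.normal(0, 0.05, 8)   # noisy picks

def rms(src):
    r = travel_times(src) - obs
    return np.sqrt(np.mean(r**2))

# Coarse global search: evaluate RMS on a sparse grid of trial sources.
gx, gy, gz = np.meshgrid(np.arange(-40, 41, 10.0),
                         np.arange(-40, 41, 10.0),
                         np.arange(1, 30, 5.0), indexing="ij")
grid = np.stack([gx, gy, gz], -1).reshape(-1, 3)
seeds = grid[np.argsort([rms(p) for p in grid])[:5]]    # best few minima

# Fine local inversion started from each seed; keep the global best.
best = min((minimize(rms, s, method="Nelder-Mead") for s in seeds),
           key=lambda r: r.fun)
print("located at", np.round(best.x, 2), "RMS =", round(best.fun, 4))
```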
Krokhotin, Andrey; Dokholyan, Nikolay V
2015-01-01
Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.
Solubility of nonelectrolytes: a first-principles computational approach.
Jackson, Nicholas E; Chen, Lin X; Ratner, Mark A
2014-05-15
Using a combination of classical molecular dynamics and symmetry adapted intermolecular perturbation theory, we develop a high-accuracy computational method for examining the solubility energetics of nonelectrolytes. This approach is used to accurately compute the cohesive energy density and Hildebrand solubility parameters of 26 molecular liquids. The energy decomposition of symmetry adapted perturbation theory is then utilized to develop multicomponent Hansen-like solubility parameters. These parameters are shown to reproduce the solvent categorizations (nonpolar, polar aprotic, or polar protic) of all molecular liquids studied while lending quantitative rigor to these qualitative categorizations via the introduction of simple, easily computable parameters. Notably, we find that by monitoring the first-order exchange energy contribution to the total interaction energy, one can rigorously determine the hydrogen bonding character of a molecular liquid. Finally, this method is applied to compute explicitly the Flory interaction parameter and the free energy of mixing for two different small molecule mixtures, reproducing the known miscibilities. This methodology represents an important step toward the prediction of molecular solubility from first principles.
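The last step from a simulated cohesive energy density to a Hildebrand parameter is a one-line formula, delta = sqrt((U_gas - U_liq)/V_m); the sketch below applies it with illustrative water-like numbers rather than MD output.

```python
import math

def hildebrand_delta(u_gas, u_liq, v_m):
    """Hildebrand solubility parameter in MPa^0.5.

    u_gas, u_liq: molar internal energies of the gas-phase molecule and
    of the liquid (kJ/mol), e.g. ensemble averages from MD runs.
    v_m: molar volume (cm^3/mol).
    """
    ced = (u_gas - u_liq) * 1000.0 / v_m   # J/cm^3, numerically equal to MPa
    return math.sqrt(ced)

# Illustrative numbers of the right magnitude for water (not MD output):
# cohesive energy ~41 kJ/mol, molar volume ~18 cm^3/mol -> ~47.7 MPa^0.5
print(round(hildebrand_delta(0.0, -41.0, 18.0), 1))
```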
Accurate computation of Galerkin double surface integrals in the 3-D boundary element method
Adelman, Ross; Duraiswami, Ramani
2015-01-01
Many boundary element integral equation kernels are based on the Green's functions of the Laplace and Helmholtz equations in three dimensions. These include, for example, the Laplace, Helmholtz, elasticity, Stokes, and Maxwell's equations. Integral equation formulations lead to more compact, but dense linear systems. These dense systems are often solved iteratively via Krylov subspace methods, which may be accelerated via the fast multipole method. There are advantages to Galerkin formulations for such integral equations, as they treat problems associated with kernel singularity, and lead to symmetric and better conditioned matrices. However, the Galerkin method requires each entry in the system matrix to be created via the computation of a double surface integral over one or more pairs of triangles. There are a number of semi-analytical methods to treat these integrals, which all have some issues, and are discussed in this paper. We present novel methods to compute all the integrals that arise in Galerkin fo...
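For a well-separated pair of flat triangles, the Galerkin entry with the Laplace kernel reduces to a plain double quadrature; a minimal sketch with constant basis functions and a degree-2 edge-midpoint rule follows. Singular and near-singular pairs are precisely the cases that need the semi-analytical treatments discussed in the paper.

```python
import numpy as np
from itertools import product

def midpoint_rule(tri):
    """Degree-2 quadrature on a triangle: edge midpoints, equal weights."""
    v = np.asarray(tri, float)
    pts = np.array([(v[0]+v[1])/2, (v[1]+v[2])/2, (v[2]+v[0])/2])
    area = 0.5 * np.linalg.norm(np.cross(v[1]-v[0], v[2]-v[0]))
    return pts, np.full(3, area/3)

def galerkin_entry(tri_s, tri_t):
    """Double integral of the Laplace kernel 1/(4*pi*|x-y|) over two
    well-separated triangles, with constant (P0) basis functions."""
    xs, ws = midpoint_rule(tri_s)
    ys, wt = midpoint_rule(tri_t)
    total = 0.0
    for (x, wx), (y, wy) in product(zip(xs, ws), zip(ys, wt)):
        total += wx * wy / (4*np.pi*np.linalg.norm(x - y))
    return total

tri1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri2 = [(5, 5, 0), (6, 5, 0), (5, 6, 0)]   # far from tri1: smooth kernel
print(galerkin_entry(tri1, tri2))
```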
Accurate computation of surface stresses and forces with immersed boundary methods
Goza, Andres; Morley, Benjamin; Colonius, Tim
2016-01-01
Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is a...
Necessary conditions for accurate computations of three-body partial decay widths
Garrido, E.; Jensen, A. S.; Fedorov, D. V.
2008-09-01
The partial width for decay of a resonance into three fragments is largely determined at distances where the energy is smaller than the effective potential producing the corresponding wave function. At short distances the many-body properties are accounted for by preformation or spectroscopic factors. We use the adiabatic expansion method combined with the WKB approximation to obtain the indispensable cluster model wave functions at intermediate and larger distances. We test the concept by deriving conditions for the minimal basis expressed in terms of partial waves and radial nodes. We compare results for different effective interactions and methods. Agreement is found with experimental values for a sufficiently large basis. We illustrate the ideas with realistic examples from α emission of C12 and two-proton emission of Ne17. Basis requirements for accurate momentum distributions are briefly discussed.
Energy Technology Data Exchange (ETDEWEB)
Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-01-01
We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10⁻⁴ Ha/Bohr.
Novel computational approaches characterizing knee physiotherapy
Directory of Open Access Journals (Sweden)
Wangdo Kim
2014-01-01
Full Text Available A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.
Music Genre Classification Systems - A Computational Approach
DEFF Research Database (Denmark)
Ahrendt, Peter
2006-01-01
Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should…
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
A recursive algorithm is adopted for the computation of dyadic Green's functions in three-dimensional stratified uniaxial anisotropic media with an arbitrary number of layers. Three linear equation groups for computing the coefficients of the Sommerfeld integrals are obtained according to the continuity condition of the electric and magnetic fields across the interfaces between different layers; these correspond to the TM wave produced by a vertical unit electric dipole and to the TE or TM wave produced by a horizontal unit electric dipole, respectively. All the linear equation groups can be solved via the recursive algorithm. The dyadic Green's functions with source point and field point in any layer can be conveniently obtained by merely changing the position of the elements within the source term of the linear equation groups. The problem of singularities occurring in the Sommerfeld integrals is efficiently handled by deforming the integration path in the complex plane. The expression of the dyadic Green's functions provided by this paper is terse in form, easy to program, and does not suffer from numerical overflow. Theoretical analysis and numerical examples show the accuracy and effectiveness of the algorithm.
A computational approach to negative priming
Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael
2007-09-01
Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), and depends sensitively on subtle parameter changes (such as the response-stimulus interval). The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing the results of the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (such as single-object trials).
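The threshold idea can be conveyed by a toy accumulator model. This is not the ISAM/CISAM dynamics, whose equations are not given here; it only illustrates how residual activation (positive for a repeated target, negative for a previously inhibited distractor) shifts the time needed to reach a fixed response threshold.

```python
def reaction_time(start_act, threshold=1.0, gain=0.05, leak=0.01,
                  max_steps=500):
    """Steps for a leaky accumulator to reach threshold from start_act."""
    a = start_act
    for t in range(max_steps):
        if a >= threshold:
            return t
        a += gain - leak * a   # constant drive minus leak
    return max_steps

baseline, residual = 0.0, 0.3
print("control trial:         ", reaction_time(baseline))
print("positive priming trial:", reaction_time(baseline + residual))  # repeated target
print("negative priming trial:", reaction_time(baseline - residual))  # former distractor
```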
Fast accurate computation of the fully nonlinear solitary surface gravity waves
Clamond, Didier
2013-01-01
In this short note, we present a fast, easy-to-implement algorithm for the computation of the steady solitary gravity wave solution of the free-surface Euler equations in irrotational motion. First, the problem is reformulated in a fixed domain using the conformal mapping technique. Second, the problem is reduced to a single equation for the free surface. Third, this equation is solved using Petviashvili's iterations together with a pseudo-spectral discretisation. The method has super-linear complexity, since the most demanding operations can be performed using an FFT algorithm. Moreover, when this algorithm is combined with multiple-precision arithmetic, the results can be obtained to any arbitrary accuracy.
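Petviashvili's iteration is easiest to see on a model problem. The sketch below applies it to the solitary wave of the KdV equation (c·u - u_xx = 3u² in the traveling frame, a stand-in for the paper's free-surface equation), with FFT-based pseudo-spectral derivatives and the stabilizing-factor exponent 2 appropriate for a quadratic nonlinearity.

```python
import numpy as np

# Solitary wave of KdV in the traveling frame: c*u - u_xx = 3*u^2.
# Exact solution u = (c/2)*sech(sqrt(c)*x/2)^2 is used only as a check.
N, L, c = 1024, 100.0, 1.0
x = (np.arange(N) - N//2) * (L/N)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
Lsym = c + k**2                          # Fourier symbol of (c - d^2/dx^2)

u = np.exp(-x**2)                        # rough initial guess
for _ in range(50):
    Nhat = np.fft.fft(3*u**2)
    uhat = np.fft.fft(u)
    # Petviashvili stabilizing factor; exponent 2 for quadratic nonlinearity
    S = np.sum(Lsym*np.abs(uhat)**2) / np.sum(np.conj(uhat)*Nhat)
    u = np.real(np.fft.ifft(S.real**2 * Nhat / Lsym))

exact = (c/2) / np.cosh(np.sqrt(c)*x/2)**2
print("max error vs exact soliton:", np.abs(u - exact).max())
```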
A fast and accurate method to compute the mass return from multiple stellar populations
Calura, F; Nipoti, C
2013-01-01
The mass returned to the ambient medium by aging stellar populations over cosmological times sums up to a significant fraction (20% - 30% or more) of their initial mass. This continuous mass injection plays a fundamental role in phenomena such as galaxy formation and evolution, fueling of supermassive black holes in galaxies and the consequent (negative and positive) feedback phenomena, and the origin of multiple stellar populations in globular clusters. In numerical simulations the calculation of the mass return can be time consuming, since it requires at each time step the evaluation of a convolution integral over the whole star formation history, so the computational time increases quadratically with the number of time-steps. The situation can be especially critical in hydrodynamical simulations, where different grid points are characterized by different star formation histories, and the gas cooling and heating times are shorter by orders of magnitude than the characteristic stellar lifetimes. In this pape...
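The cost structure the paper addresses is easy to reproduce: with an arbitrary, ad hoc mass-return kernel w(tau) (the real one follows from the IMF and stellar lifetimes), the naive evaluation below convolves the full star formation history at every step and therefore scales as O(N²) in the number of time steps.

```python
import numpy as np

# Illustrative mass-return rate per unit mass formed, as a function of
# stellar population age tau (Gyr). This power-law shape is ad hoc; the
# real kernel follows from the IMF and stellar lifetime functions.
def w(tau):
    return np.where(tau > 0.003, 0.05/(tau + 0.01), 0.0)   # 1/Gyr

t = np.linspace(0.0, 13.7, 2000)     # cosmic time grid, Gyr
dt = t[1] - t[0]
sfr = np.exp(-t/3.0)                 # toy star formation history, Msun/yr

# Naive evaluation: at every step, convolve over the full past history.
# Cost grows as O(N^2) with the number of time steps, which is what makes
# this term expensive inside hydrodynamical simulations.
mdot_ret = np.array([np.sum(sfr[:i+1] * w(t[i] - t[:i+1])) * dt
                     for i in range(len(t))])
print("return rate at t = 13.7 Gyr:", mdot_ret[-1])
```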
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes
DEFF Research Database (Denmark)
Kepp, Kasper Planeta; Cirera, J
2009-01-01
Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. Using six functionals (B3LYP, BP86, TPSS, TPSSh, M06, and M06L), this work studies nine complexes (seven with iron and two with cobalt) for which experimental enthalpies of spin crossover are available. It is shown that such enthalpies can be used as quantitative benchmarks of a functional's ability to balance electron correlation in both of the involved states. TPSSh achieves an unprecedented mean absolute error … effects of first-row transition metal systems. Furthermore, it is shown that given an experimental structure of an iron complex, TPSSh can predict the electronic state corresponding to that experimental structure. We recommend this functional as current state-of-the-art for studying spin crossover…
Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.
Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola
2016-01-01
Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non
Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters
Strutzenberg, Louise L.; Williams, Brandon R.
2011-01-01
Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck resulting in upward flow, which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.
Blueprinting Approach in Support of Cloud Computing
Directory of Open Access Journals (Sweden)
Willem-Jan van den Heuvel
2012-03-01
Full Text Available Current cloud service offerings, i.e., Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) offerings, are often provided as monolithic, one-size-fits-all solutions and give little or no room for customization. This limits the ability of Service-based Application (SBA) developers to configure and syndicate offerings from multiple SaaS, PaaS, and IaaS providers to address their application requirements. Furthermore, combining different independent cloud services necessitates a uniform description format that facilitates the design, customization, and composition. Cloud Blueprinting is a novel approach that allows SBA developers to easily design, configure and deploy virtual SBA payloads on virtual machines and resource pools on the cloud. We propose the Blueprint concept as a uniform abstract description for cloud service offerings that may cross different cloud computing layers, i.e., SaaS, PaaS and IaaS. To support developers with SBA design and development in the cloud, this paper introduces a formal Blueprint Template for unambiguously describing a blueprint, as well as a Blueprint Lifecycle that guides developers through the manipulation, composition and deployment of different blueprints for an SBA. Finally, the empirical evaluation of the blueprinting approach within an EC FP7 project is reported and an associated blueprint prototype implementation is presented.
Analysis of computational models for an accurate study of electronic excitations in GFP
DEFF Research Database (Denmark)
Schwabe, Tobias; Beerepoot, Maarten; Olsen, Jógvan Magnus Haugaard
2015-01-01
Using the chromophore of the green fluorescent protein (GFP), the performance of a hybrid RI-CC2 / polarizable embedding (PE) model is tested against a quantum chemical cluster approach. Moreover, the effect of the rest of the protein environment is studied by systematically increasing the size of the included environment. … The treatment of only a small region around the chromophore is only by coincidence a good approximation. Therefore, such cluster approaches should be used with care. Based on our results, we suggest that polarizable embedding models, including a large part of the environment to describe its effect…
Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework
Energy Technology Data Exchange (ETDEWEB)
Austin, T.M. (Tech-X, Boulder); Cary, J.R. (Tech-X, Boulder / Colorado U.); Bellantoni, L. (Argonne)
2009-05-01
We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas, including obtaining MHD spectra from time-domain simulations.
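The SVD-with-Tikhonov ingredient of the post-processing can be sketched on a generic ill-conditioned least-squares problem; this is not the Werner-Cary mode-extraction pipeline itself, only the regularized solve it relies on.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Filtered inverse: small singular values are damped, not inverted
    filt = s / (s**2 + lam**2)
    return Vt.T @ (filt * (U.T @ b))

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
A[:, -1] = A[:, 0] + 1e-8*rng.normal(size=40)   # make A ill-conditioned
x_true = rng.normal(size=10)
b = A @ x_true + 1e-6*rng.normal(size=40)

for lam in (0.0, 1e-6, 1e-3):
    x = tikhonov_solve(A, b, lam)
    print(f"lambda={lam:g}: residual={np.linalg.norm(A@x-b):.2e}, "
          f"|x|={np.linalg.norm(x):.2e}")
```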
Directory of Open Access Journals (Sweden)
Yuqing He
2014-01-01
Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of the model and significant uncertainties in many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible, so an analysis of the flight maneuvering ability of an RFR must be conducted prior to test flight. Our aim in this paper is to use a numerical method called algorithmic differentiation (AD) to solve this problem. The basic idea is to compute the internal state (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is differentially flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative-based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.
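The AD ingredient can be illustrated with a minimal forward-mode implementation based on dual numbers; real flatness-based planning needs higher-order derivatives of the full RFR outputs, but the principle, exact derivatives without finite-difference step-size error, is the same.

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dt together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val*o.val, self.dot*o.val + self.val*o.dot)
    __rmul__ = __mul__

# Toy "output trajectory" y(t); seeding dt/dt = 1 propagates dy/dt
# exactly through every arithmetic operation.
def y(t):
    return 3*t*t + 2*t + 1

t = Dual(2.0, 1.0)
out = y(t)
print("y(2) =", out.val, " dy/dt(2) =", out.dot)   # 17.0 and 14.0
```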
Efficient and accurate computation of electric field dyadic Green's function in layered media
Cho, Min Hyung
2016-01-01
Concise and explicit formulas for dyadic Green's functions, representing the electric and magnetic fields due to a dipole source placed in layered media, are derived in this paper. First, the electric and magnetic fields in the spectral domain for the half space are expressed using Fresnel reflection and transmission coefficients. Each component of electric field in the spectral domain constitutes the spectral Green's function in layered media. The Green's function in the spatial domain is then recovered involving Sommerfeld integrals for each component in the spectral domain. By using Bessel identities, the number of Sommerfeld integrals are reduced, resulting in much simpler and more efficient formulas for numerical implementation compared with previous results. This approach is extended to the three-layer Green's function. In addition, the singular part of the Green's function is naturally separated out so that integral equation methods developed for free space Green's functions can be used with minimal mo...
Li, Xin; Han, Xingpeng; Sun, Wei; Wang, Meng; Jing, Guohui
2016-01-01
Background: To evaluate the role of computed tomography (CT) in the preoperative diagnosis of intrathymic cyst and small thymoma, and to determine the best CT threshold for distinguishing intrathymic cyst from small thymoma. Methods: We retrospectively reviewed the medical records of 30 patients (17 intrathymic cysts and 13 small thymomas) who had undergone thoracoscopic resection of mediastinal masses (diameter less than 3 cm) between January 2014 and July 2015 at our hospital. Clinical and CT features were compared and receiver-operating characteristic (ROC) curve analysis was performed. Results: The CT value of small thymoma [39.5 HU (IQR, 33.7–42.2 HU)] was significantly higher than that of intrathymic cyst [25.8 HU (IQR, 22.3–29.3 HU), P=0.004]. A CT value of 31.2 HU served as a threshold for distinguishing small thymoma from intrathymic cyst (sensitivity 92.3%, specificity 82.4%). The difference ΔCT between the enhanced and non-enhanced CT values also differed significantly between small thymoma [18.7 HU (IQR, 10.9–19.0 HU)] and intrathymic cyst [4.3 HU (IQR, 3.0–11.7 HU), P=0.04]. The density was more homogeneous in intrathymic cyst than in small thymoma, and the contour of the intrathymic cyst was smoother. Conclusions: Preoperative CT scans can help clinicians distinguish intrathymic cyst from small thymoma, and we recommend 31.2 HU as the best threshold. Contrast-enhanced CT is useful for further differentiation of the two diseases. PMID:27621863
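A threshold such as the reported 31.2 HU is typically read off the ROC curve, e.g. by maximizing Youden's J. The sketch below does this on synthetic HU values loosely matching the reported medians, not on the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
# Synthetic CT attenuation values (HU), loosely matching the reported
# medians: intrathymic cysts ~26 HU, small thymomas ~39 HU.
hu_cyst = rng.normal(26, 4, 17)
hu_thymoma = rng.normal(39, 4, 13)

y_true = np.r_[np.zeros(17), np.ones(13)]     # 1 = thymoma
scores = np.r_[hu_cyst, hu_thymoma]

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                                  # Youden's J statistic
best = np.argmax(j)
print(f"best threshold = {thresholds[best]:.1f} HU, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```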
Directory of Open Access Journals (Sweden)
Hrubý Jan
2012-04-01
Full Text Available Mathematical modeling of non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require substantial computation time. For this reason, modelers often fall back on unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
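The general table-lookup idea, with density and internal energy as the independent variables, can be sketched as below. The toy p(rho, e) formula is invented for the example, and plain interpolation of independently tabulated properties lacks exactly the thermodynamic consistency that the paper's piecewise representation is designed to preserve.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Precompute an expensive property, here a toy p(rho, e), on a grid once...
rho = np.linspace(0.1, 1000.0, 200)          # density, kg/m^3
e = np.linspace(2.0e6, 3.0e6, 200)           # internal energy, J/kg
R = 461.5                                     # gas constant of water vapor
P = np.outer(rho, (e - 1.9e6)) * (R/718.0)   # stand-in formula, not IAPWS

p_interp = RegularGridInterpolator((rho, e), P)

# ...then every flow-solver call becomes a cheap direct lookup in
# (density, internal energy), with no variable transformation or iteration.
print(p_interp([[0.59, 2.65e6]]))
```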
Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D
1999-06-01
In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by providing accurate guidance to a selected target, based on a model of the pericardial effusion acquired through three-dimensional (3D) data recording. The required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquisition of the echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart by determining a region that remains stable throughout the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was carried out on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of 2.5 mm or better was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond a simple improvement of the current technique, this method could offer a new way to reach the heart and a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application.
Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han
2015-12-01
Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged van der Waals interaction in classical molecular dynamics simulations, it has been a long-standing challenge to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we outline the formulations and developments of these methods, emphasize the intrinsic connections among the three types, and focus on the open problems, which are often associated with the boundary conditions of electrostatics. The survey closes with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304), the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.
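As a concrete reference point for the "Ewald-type" category, a minimal textbook Ewald sum (Gaussian units, O(N²) loops, minimum-image real-space part) looks as follows; production codes replace the reciprocal loop with particle-mesh variants.

```python
import numpy as np
from scipy.special import erfc

def ewald_energy(pos, q, L, alpha=None, kmax=6):
    """Total Coulomb energy of point charges in a cubic periodic box
    (Gaussian units, so no 1/(4*pi*eps0) factor)."""
    n = len(q)
    if alpha is None:
        alpha = 5.0 / L            # common splitting-parameter heuristic
    # Real-space part: screened interactions, minimum-image convention
    e_real = 0.0
    for i in range(n):
        for j in range(i+1, n):
            d = pos[i] - pos[j]
            d -= L * np.round(d / L)
            r = np.linalg.norm(d)
            e_real += q[i]*q[j] * erfc(alpha*r) / r
    # Reciprocal-space part: smooth compensating Gaussians
    e_rec, V = 0.0, L**3
    for m in np.ndindex(2*kmax+1, 2*kmax+1, 2*kmax+1):
        mv = np.array(m) - kmax
        if not mv.any():
            continue
        kv = 2*np.pi*mv/L
        k2 = kv @ kv
        S = np.sum(q * np.exp(1j * pos @ kv))        # structure factor
        e_rec += (2*np.pi/V) * np.exp(-k2/(4*alpha**2))/k2 * np.abs(S)**2
    # Self-energy correction removes each Gaussian's interaction with itself
    e_self = -alpha/np.sqrt(np.pi) * np.sum(q**2)
    return e_real + e_rec + e_self

# Two opposite unit charges, 1 apart in a 10x10x10 box: close to -1/r,
# plus small corrections from the periodic images.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(ewald_energy(pos, np.array([1.0, -1.0]), L=10.0))
```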
Manz, Thomas A; Sholl, David S
2011-12-13
The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.
Yokogawa, Daisuke; Ono, Kohei; Sato, Hirofumi; Sakaki, Shigeyoshi
2011-11-14
The ligand exchange process of cis-platin in aqueous solution was studied using RISM-SCF-SEDD (reference interaction site model-self-consistent field with spatial electron density distribution) method, a hybrid approach of quantum chemistry and statistical mechanics. The analytical nature of RISM theory enables us to compute accurate reaction free energy in aqueous solution based on CCSD(T), together with the microscopic solvation structure around the complex. We found that the solvation effect is indispensable to promote the dissociation of the chloride anion from the complex.
Kory, Carol L.
1999-01-01
The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made
A semantic-web approach for modeling computing infrastructures
M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat
2013-01-01
This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications, requiring the design and production of many novel devices in record time. To achieve this, the TWT industry relies heavily on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and the predictions of conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed, requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI), together with suspected causes such as reflections within the TWT, becomes a major consideration. To experimentally investigate effects of the physical TWT on ISI would be
Energy Technology Data Exchange (ETDEWEB)
Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
Thornburg, Jonathan
2010-01-01
If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^{\infty} F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...
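For context, the Barack-Ori mode-sum scheme mentioned above is conventionally written in the following standard form (a sketch from the general self-force literature, not this paper's exact notation):

$$F^{\mathrm{self}} = \sum_{\ell=0}^{\infty} \left[ F^{\mathrm{full}}_{\ell} - A\,L - B - \frac{C}{L} \right] - D, \qquad L \equiv \ell + \tfrac{1}{2},$$

where the regularization parameters $A$, $B$, $C$, $D$ are known analytically for generic geodesic orbits; the numerically computed modes up to $\ell \approx 30$ are supplemented by fitting the large-$\ell$ tail of the regularized series.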
Fast and Accurate Electronic Excitations in Cyanines with the Many-Body Bethe-Salpeter Approach.
Boulanger, Paul; Jacquemin, Denis; Duchemin, Ivan; Blase, Xavier
2014-03-11
The accurate prediction of the optical signatures of cyanine derivatives remains an important challenge in theoretical chemistry. Indeed, up to now, only the most expensive quantum chemical methods (CAS-PT2, CC, DMC, etc.) yield consistent and accurate data, impeding applications to real-life molecules. Here, we investigate the lowest lying singlet excitation energies of increasingly long cyanine dyes within the GW and Bethe-Salpeter Green's function many-body perturbation theories. Our results are in remarkable agreement with available coupled-cluster (exCC3) data, bringing these two single-reference perturbation techniques within a 0.05 eV maximum discrepancy. By comparison, available TD-DFT calculations with various semilocal, global, or range-separated hybrid functionals overshoot the transition energies by a typical error of 0.3-0.6 eV. This accuracy is achieved with a parameter-free formalism that performs similarly for metallic or insulating, finite-size or extended systems.
Accurate Vehicle Location System Using RFID, an Internet of Things Approach.
Prinsloo, Jaco; Malekian, Reza
2016-06-04
Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and Global System for Mobile Communications (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking is facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reported in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as on the integration of independent technologies to address a specific challenge. The proposed system was then designed, implemented, and field tested. An RFID transponder was successfully designed, and a read range of approximately 31 cm was obtained in the low-frequency communication range (125 kHz to 134 kHz). Field tests showed that a vehicle could be accurately located and tracked. It was also found that the antenna size of both the RFID reader unit and the RFID transponder plays a critical role in the maximum communication range that can be achieved.
Directory of Open Access Journals (Sweden)
Theodore D. Katsilieris
2017-03-01
Full Text Available Terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, careful theoretical and numerical study is necessary before the installation of such a communication system. In this work, we present exact and accurately approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters: the transmitted power, the attenuation due to fog, the ambient noise, and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma-gamma distribution for weak or moderate-to-strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems' performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
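As a small illustration of the statistical machinery mentioned above, the sketch below evaluates the outage probability of a link under weak-turbulence (lognormal) scintillation; the variance and threshold values are illustrative assumptions, not parameters from the paper.

```python
# Outage probability P(I < I_th) for lognormal scintillation with E[I] = 1,
# i.e. ln I ~ N(-sigma2/2, sigma2). Parameter values are assumed.
import numpy as np
from scipy.stats import norm

sigma2 = 0.3   # log-irradiance variance (weak turbulence), assumed
I_th = 0.4     # normalized irradiance threshold below which the link is in outage

p_out = norm.cdf((np.log(I_th) + sigma2 / 2) / np.sqrt(sigma2))
print(f"outage probability: {p_out:.3e}")
```

For moderate-to-strong turbulence the same calculation would integrate the gamma-gamma density up to the threshold instead.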
COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING
Directory of Open Access Journals (Sweden)
Afonnikov D.
2012-08-01
Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in the studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands and dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.
Computer networks ISE a systems approach
Peterson, Larry L
2007-01-01
Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p
Human Computer Interaction: An intellectual approach
Directory of Open Access Journals (Sweden)
Kuntal Saroha
2011-08-01
Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.
Computer science approach to quantum control
Energy Technology Data Exchange (ETDEWEB)
Janzing, D.
2006-07-01
Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics has therefore two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Directory of Open Access Journals (Sweden)
Zhibin Miao
2015-12-01
Full Text Available With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are applied to measure vehicle stability parameters by fusing the data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although prior model parameters must be known. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are carried out to verify the advantages of this approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
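To make the fusion step concrete, here is a minimal linear Kalman filter sketch (not the paper's two-stage filter or four-wheel model): an INS-style acceleration drives the prediction, and a slower GPS-style position fix drives the correction. All signals and noise levels are illustrative assumptions.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input (from INS)
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = 1e-4 * np.eye(2)                    # process noise covering INS drift
R = np.array([[4.0]])                   # GPS position noise variance (m^2)

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(0)
for k in range(1000):
    a = np.array([[0.1]])               # assumed accelerometer reading (m/s^2)
    x = F @ x + B @ a                   # high-rate predict with INS
    P = F @ P @ F.T + Q
    if k % 100 == 0:                    # GPS fix arrives at 1 Hz
        z = H @ x + rng.normal(0.0, 2.0)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)         # low-rate correct with GPS
        P = (np.eye(2) - K @ H) @ P
print("fused [position, velocity]:", x.ravel())
```

The same predict/correct pattern generalizes to the yaw-rate and sideslip states once a vehicle dynamic model replaces the toy kinematics.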
Computational dynamics for robotics systems using a non-strict computational approach
Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.
1989-01-01
A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.
Baiardi, Alberto; Bloino, Julien; Barone, Vincenzo
2015-07-14
The interpretation and analysis of experimental resonance-Raman (RR) spectra can be significantly facilitated by vibronic computations based on reliable quantum-mechanical (QM) methods. With the aim of improving the description of large and flexible molecules, our recent time-dependent formulation to compute vibrationally resolved electronic spectra, based on Cartesian coordinates, has been extended to support internal coordinates. A set of nonredundant delocalized coordinates is automatically generated from the molecular connectivity thanks to a new general and robust procedure. In order to validate our implementation, a series of molecules has been used as test cases. Among them, rigid systems show that normal modes based on Cartesian and delocalized internal coordinates provide equivalent results, but the latter set is much more convenient and reliable for systems characterized by strong geometric deformations associated with the electronic transition. The so-called Z-matrix internal coordinates, which perform well for chain molecules, are also shown to be poorly suited in the presence of cycles or nonstandard structures.
Accurate characterization of weak neutron fields by using a Bayesian approach.
Medkour Ishak-Boushaki, G; Allab, M
2017-04-01
A Bayesian analysis of data derived from neutron spectrometric measurements provides the advantage of determining rigorously integral physical quantities characterizing the neutron field and their respective related uncertainties. The first and essential step in a Bayesian approach is the parameterization of the investigated neutron spectrum. The aim of this paper is to investigate the sensitivity of the Bayesian results, mainly the neutron dose H*(10) required for radiation protection purposes and its correlated uncertainty, to the selected neutron spectrum parameterization.
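As a reminder of the machinery involved (standard Bayesian inference, not this paper's specific parameterization), the posterior over the spectrum parameters $\theta$ given spectrometric data $d$ and the induced estimate of the ambient dose equivalent are

$$p(\theta \mid d) = \frac{p(d \mid \theta)\, p(\theta)}{\int p(d \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}, \qquad \langle H^{*}(10) \rangle = \int H^{*}(10)(\theta)\, p(\theta \mid d)\, \mathrm{d}\theta,$$

with the spread of the posterior distribution of $H^{*}(10)$ providing the rigorously propagated uncertainty; this makes clear why the choice of parameterization for $\theta$ can influence the result.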
Human brain mapping: Experimental and computational approaches
Energy Technology Data Exchange (ETDEWEB)
Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)
1998-11-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high-performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human context. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Directory of Open Access Journals (Sweden)
A. D. Zarrabi
2010-12-01
Full Text Available PURPOSE: To design a simple, cost-effective system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy (PCNL). MATERIALS AND METHODS: The design consists of a low-cost, light-weight, portable mechanical gantry with a needle guiding device. Using C-arm fluoroscopy, two images of the contrast-filled renal collecting system are obtained: at 0 degrees (perpendicular to the kidney) and at 20 degrees. These images are relayed to a laptop computer containing the software and graphic user interface for selecting the targeted calyx. The software provides numerical settings for the 3 axes of the gantry, which are used to position the needle guiding device. The needle is advanced through the guide to the depth calculated by the software, thus puncturing the targeted calyx. Testing of the system was performed on 2 target types: (1) radiolucent plastic tubes the approximate size of a renal calyx (5 or 10 mm in diameter, 30 mm in length); and (2) foam-occluded, contrast-filled porcine kidneys. RESULTS: Tests using target type 1 with 10 mm diameter (n = 14) and 5 mm diameter (n = 7) tubes resulted in a 100% targeting success rate, with a mean procedure duration of 10 minutes. Tests using target type 2 (n = 2) were both successful, with accurate puncturing of the selected renal calyx, and a mean procedure duration of 15 minutes. CONCLUSIONS: The mechanical gantry system described in this paper is low-cost, portable, light-weight, and simple to set up and operate. C-arm fluoroscopy is limited to two images, thus reducing radiation exposure significantly. Testing of the system showed an extremely high degree of accuracy in gaining precise access to a targeted renal calyx.
Computational Models of Spreadsheet Development: Basis for Educational Approaches
Hodnigg, Karin; Mittermeir, Roland T
2008-01-01
Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.
Energy Technology Data Exchange (ETDEWEB)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. The use of sparse line-hopping scan to acquire STEM images is demonstrated with atomic resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
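As a toy illustration of the acquisition idea (assumptions only, not the authors' scan controller): build a line-hopping mask that dwells on short random segments so that only a chosen fraction of each scan line is illuminated; in-painting would then reconstruct the unsampled pixels.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx, fill = 512, 512, 0.2            # target ~20% sampling => ~5x lower dose
mask = np.zeros((ny, nx), dtype=bool)
for row in range(ny):
    col = int(rng.integers(0, 16))      # random start avoids columnar artifacts
    while col < nx:
        seg = int(rng.integers(2, 6))   # short dwell segment along the line
        mask[row, col:col + seg] = True
        gap_mean = seg * (1 / fill - 1) # mean hop length sets duty cycle ~ fill
        col += seg + int(rng.integers(int(0.5 * gap_mean), int(1.5 * gap_mean) + 1))
print("fraction of pixels illuminated:", mask.mean())   # close to 0.2
```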
Heterogeneous Computing in Economics: A Simplified Approach
DEFF Research Database (Denmark)
Dziubinski, Matt P.; Grassi, Stefano
This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a ...
Molecular electromagnetism a computational chemistry approach
Sauer, Stephan P A
2011-01-01
A textbook for a one-semester course for students in chemistry, physics and nanotechnology, this book examines the interaction of molecules with electric and magnetic fields, as, for example, in light. The book provides the necessary background knowledge for simulating these interactions on computers with modern quantum chemical software.
Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra
2015-11-01
3D particle image velocimetry (PIV) is becoming a popular technique to study biological flows. PIV images that contain fish or other animals around which flow is being studied need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish, along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of a fish in a volume are determined, it can be masked out.
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U =4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol +U ) is most appropriate for studying structure versus spin state, while the local density approximation (LDA +U ) is most appropriate for determining accurate energetics for defect properties.
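For reference, the rotationally invariant (Dudarev) DFT+U correction that is commonly meant when quoting a single effective $U$ (a standard textbook expression, not reproduced from this paper) reads

$$E_{\mathrm{DFT}+U} = E_{\mathrm{DFT}} + \frac{U_{\mathrm{eff}}}{2} \sum_{\sigma} \mathrm{Tr}\left[ n^{\sigma} - n^{\sigma} n^{\sigma} \right],$$

where $n^{\sigma}$ is the occupation matrix of the correlated (here Co 3d) states for spin $\sigma$ and $U_{\mathrm{eff}} = U - J$; the penalty vanishes for integer occupations and thus counteracts the self-interaction of partially filled d states.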
Milman, Mark H
2005-12-01
Astrometric measurements using stellar interferometry rely on precise measurement of the central white-light fringe to accurately obtain the optical pathlength difference of incoming starlight to the two arms of the interferometer. One standard approach to stellar interferometry uses a channeled spectrum to determine phases at a number of different wavelengths, which are then converted to the pathlength delay. When throughput is low, these channels are broadened to improve the signal-to-noise ratio. Ultimately the ability to use monochromatic models and algorithms in each of the channels to extract phase becomes problematic, and knowledge of the spectrum must be incorporated to achieve the accuracies required of the astrometric measurements. To accomplish this, an optimization problem is posed to estimate simultaneously the pathlength delay and the spectrum of the source. Moreover, the nature of the parameterization of the spectrum that is introduced circumvents the need to solve directly for these parameters, so that the optimization problem reduces to a scalar problem in just the pathlength delay variable. A number of examples are given to show the robustness of the approach.
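The reduction to a scalar problem can be mimicked with variable projection: for each trial delay the spectrum weights enter the fringe model linearly and are eliminated by least squares, so only the delay is searched. The fringe model, channel wavelengths and data below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam = np.linspace(0.5e-6, 0.9e-6, 40)   # channel center wavelengths (m), assumed
d_true = 3.2e-6                          # true pathlength delay (m), assumed
rng = np.random.default_rng(2)
y = 1.0 + 0.8 * np.cos(2 * np.pi * d_true / lam) \
    + 0.05 * rng.standard_normal(lam.size)

def residual(d):
    # Columns: DC level plus cos/sin fringe terms; their weights stand in for
    # the (eliminated) spectrum parameters.
    A = np.column_stack([np.ones_like(lam),
                         np.cos(2 * np.pi * d / lam),
                         np.sin(2 * np.pi * d / lam)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ w - y) ** 2))

# The objective is oscillatory in d, so scan coarsely and refine locally.
grid = np.linspace(0.0, 10e-6, 2001)
d0 = grid[np.argmin([residual(d) for d in grid])]
opt = minimize_scalar(residual, bounds=(d0 - 0.1e-6, d0 + 0.1e-6), method="bounded")
print(f"estimated delay: {opt.x * 1e6:.3f} um")
```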
Soulez, Ferréol; Denis, Loïc; Fournier, Corinne; Thiébaut, Eric; Goepfert, Charles
2007-04-01
We propose a microparticle localization scheme in digital holography. Most conventional digital holography methods are based on the Fresnel transform and present several problems, such as twin-image noise and border effects. To avoid these difficulties, we propose an inverse-problem approach, which yields the optimal particle set that best models the observed hologram image. We resolve this global optimization problem by conventional particle detection followed by a local refinement for each particle. Results for both simulated and real digital holograms show strong improvement in the localization of the particles, particularly along the depth dimension. In our simulations, the position precision is ≥1 μm rms. Our results also show that the localization precision does not deteriorate for particles near the edge of the field of view.
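A one-dimensional caricature of the detect-then-refine loop reads as follows; the particle signature, noise level and positions are assumed stand-ins, not the authors' hologram model.

```python
import numpy as np
from scipy.optimize import minimize

def pattern(x, pos, amp):
    # Stand-in for a particle's diffraction signature on the detector.
    return amp * np.sinc((x - pos) / 0.05)

x = np.linspace(0.0, 1.0, 2000)
truth = [(0.31, 1.0), (0.62, 0.7)]
rng = np.random.default_rng(3)
data = sum(pattern(x, p, a) for p, a in truth) + 0.02 * rng.standard_normal(x.size)

residual, found = data.copy(), []
for _ in range(len(truth)):
    i0 = int(np.argmax(np.abs(residual)))        # coarse detection
    obj = lambda q: float(np.sum((residual - pattern(x, q[0], q[1])) ** 2))
    q = minimize(obj, [x[i0], residual[i0]]).x   # local refinement
    residual -= pattern(x, q[0], q[1])           # explain and subtract
    found.append(tuple(np.round(q, 3)))
print("recovered (position, amplitude):", found)
```

The real scheme works in 3-D with a physical hologram model, which is what yields the reported depth accuracy.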
Computational Approach To Understanding Autism Spectrum Disorders
Directory of Open Access Journals (Sweden)
Włodzisław Duch
2012-01-01
Full Text Available Every year the prevalence of Autism Spectrum Disorders (ASD) is rising. Is there a unifying mechanism of various ASD cases at the genetic, molecular, cellular or systems level? The hypothesis advanced in this paper is focused on neural dysfunctions that lead to problems with attention in autistic people. Simulations of attractor neural networks performing cognitive functions help to assess system long-term neurodynamics. The Fuzzy Symbolic Dynamics (FSD) technique is used for the visualization of attractors in the semantic layer of the neural model of reading. Large-scale simulations of brain structures characterized by a high order of complexity require enormous computational power, especially if biologically motivated neuron models are used to investigate the influence of cellular structure dysfunctions on the network dynamics. Such simulations have to be implemented on computer clusters in grid-based architectures.
Music Genre Classification Systems - A Computational Approach
Ahrendt, Peter; Hansen, Lars Kai
2006-01-01
Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...
Computational Enzymology, a ReaxFF approach
DEFF Research Database (Denmark)
Corozzi, Alessandro
This PhD project is about the development of a new method to improve our understanding of enzyme catalysis in atomistic detail. Currently the theory able to describe chemical systems and their reactivity is quantum mechanics (QM): electronic structure methods that use approximations of QM...... the simulation time. In short: on one hand we have accurate QM methods able to describe reactivity but limited in the size of the system they can describe, while on the other hand we have molecular mechanics and ordinary force-fields that are virtually unlimited in size but unable to straightforwardly describe...... chemical reactivity. A reactive force-field (ReaxFF) is a simplified model that aims to bridge the gap between quantum chemistry methods and the ordinary force-fields of classical molecular mechanics, enabling MM to model chemical reactions, as a QM method does, with bond forming and breaking events...
Fu, Jian; Tan, Renbo; Chen, Liyuan
2014-01-01
X-ray differential phase-contrast computed tomography (DPC-CT) is a powerful physical and biochemical analysis tool. In practical applications, DPC-CT often faces challenges due to insufficient data caused by few-view scanning, bad or missing detector channels, or a limited scanning angular range. These occur quite frequently because of experimental constraints from imaging hardware, scanning geometry, and the exposure dose delivered to living specimens. In this work, we analyze the influence of incomplete data on DPC-CT image reconstruction. Then, a reconstruction method is developed and investigated for incomplete-data DPC-CT. It is based on an algebraic iterative reconstruction technique, which minimizes the image total variation and permits accurate tomographic imaging with less data. This work comprises a numerical study of the method and its experimental verification using a dataset measured at the W2 beamline of the storage ring DORIS III equipped with a Talbot-Lau interferometer. The numerical and experimental results demonstrate that the presented method can handle incomplete data. It will be of interest for a wide range of DPC-CT applications in medicine, biology, and nondestructive testing.
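A minimal sketch of the reconstruction idea, with a random stand-in for the projector rather than the actual DPC-CT system matrix: alternate a Landweber-style algebraic data step with a few descent steps on the (smoothed) image total variation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32                                        # n x n image
A = rng.random((600, n * n)) / n              # stand-in few-view projector
x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0
b = A @ x_true.ravel()

def tv_grad(img, eps=1e-8):
    # Gradient of the smoothed isotropic total variation.
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    return -(np.diff(gx, axis=1, prepend=gx[:, :1]) +
             np.diff(gy, axis=0, prepend=gy[:1, :]))

x = np.zeros(n * n)
tau = 1.0 / np.linalg.norm(A, 2) ** 2         # Landweber step size
for _ in range(200):
    x = x + tau * (A.T @ (b - A @ x))         # algebraic data-fidelity step
    img = x.reshape(n, n)
    for _ in range(3):                        # a few TV minimization steps
        img = img - 0.02 * tv_grad(img)
    x = np.clip(img, 0.0, None).ravel()       # nonnegativity constraint
print("relative error:",
      np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
```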
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are almost useless for prediction, and they are not useful to advance the intake of drugs to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and sensor failures of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Kong, Hao; Ma, Zhuoran; Wang, Song; Gong, Xiaoyun; Zhang, Sichun; Zhang, Xinrong
2014-08-05
Inspired by an ancient Chinese poison test, we report a rapid hydrogen sulfide detection strategy in specific areas of live cells using silver needles, with a good spatial resolution of 2 × 2 μm². Besides its accurate-localization ability, this reflection-based strategy also has the attractive merits of convenience and robust response, requiring no pretreatment and only a short detection time. The successful evaluation of endogenous H2S levels in the cytoplasm and nucleus of human A549 cells demonstrates the potential of our strategy in scientific research and medical diagnosis.
Directory of Open Access Journals (Sweden)
C. Sun
2010-03-01
obtained from RS retrieval, which was in accordance with previous studies (Jamieson, 1982; Dugas and Ainsworth, 1985; Benson et al., 1992; Pereira and Nova, 1992).
After the data fusion, the correlation (R^2 = 0.8516) between the monthly runoff obtained from the simulation based on ET retrieval and the observed data was higher than that (R^2 = 0.8411) between the data obtained from the PM-based ET simulation and the observed data. As for the RMSE, the result (RMSE = 26.0860) between the simulated runoff based on ET retrieval and the observed data was also superior to the result (RMSE = 35.71904) between the simulated runoff obtained with PM-based ET and the observed data. As for the MBE parameter, the result (MBE = −8.6578) for the RS retrieval method was clearly better than that (MBE = −22.7313) for the PM-based method. The comparison showed that the RS retrieval had better adaptivity and higher accuracy than the PM-based method, and that the new approach based on data fusion and the distributed hydrological model is feasible, reliable and worth studying further.
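For clarity, the three agreement metrics quoted above are computed as follows (illustrative arrays; obs and sim stand for observed and simulated monthly runoff):

```python
import numpy as np

obs = np.array([120.0, 85.0, 60.0, 150.0, 200.0, 95.0])   # assumed values
sim = np.array([112.0, 90.0, 55.0, 160.0, 190.0, 100.0])  # assumed values

r2 = np.corrcoef(obs, sim)[0, 1] ** 2      # squared correlation coefficient
rmse = np.sqrt(np.mean((sim - obs) ** 2))  # root-mean-square error
mbe = np.mean(sim - obs)                   # mean bias error (negative = underestimate)
print(f"R^2 = {r2:.4f}, RMSE = {rmse:.4f}, MBE = {mbe:.4f}")
```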
Global computational algebraic topology approach for diffusion
Auclair-Fortier, Marie-Flavie; Ziou, Djemel; Allili, Madjid
2004-05-01
One physical process involved in many computer vision problems is the heat diffusion process. Such partial differential equations (PDEs) are continuous and have to be discretized by some technique, mostly mathematical processes like finite differences or finite elements. The continuous domain is subdivided into sub-domains in which there is only one value. The diffusion equation comes from energy conservation, so it is valid on the whole domain. We use the global equation instead of discretizing the PDE obtained from it by a limit process. To encode these physical global values over pixels of different dimensions, we use a computational algebraic topology (CAT)-based image model. This model was proposed by Ziou and Allili and used for the deformation of curves and optical flow. It introduces the image support as a decomposition in terms of points, edges, surfaces, volumes, etc. Images of any dimension can then be handled. After decomposing the physical principles of heat transfer into basic laws, we recall the CAT-based image model and use it to encode the basic laws. We then present experimental results for nonlinear graylevel diffusion for denoising, ensuring preservation of thin features.
Computational approaches to homogeneous gold catalysis.
Faza, Olalla Nieto; López, Carlos Silva
2015-01-01
Homogeneous gold catalysis has been expanding for the last decade at an outstanding pace. The best-described reactivity of Au(I) and Au(III) species is based on gold's properties as a soft Lewis acid, but new reactivity patterns have recently emerged which further expand the range of transformations achievable using gold catalysis, with examples of dual gold activation, hydrogenation reactions, or Au(I)/Au(III) catalytic cycles. In this scenario, to fully develop all these new possibilities, the use of computational tools to understand at an atomistic level of detail the complete role of gold as a catalyst is unavoidable. In this work we aim to provide a comprehensive review of the available benchmark works on methodological options for studying homogeneous gold catalysis, in the hope that this effort can help guide the choice of method in future mechanistic studies involving gold complexes. This is relevant because a representative number of current mechanistic studies still use methods which have been reported as inappropriate and dangerously inaccurate for this chemistry. Together with this, we describe a number of recent mechanistic studies where computational chemistry has provided relevant insights into non-conventional reaction paths, unexpected selectivities or novel reactivity, which illustrate the complexity behind gold-mediated organic chemistry.
A complex network approach to cloud computing
Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura
2015-01-01
Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees, and the opposite holds for larger average degrees. With respect to the cost, smaller values are found in the BA ...
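The comparison described above can be reproduced in miniature with networkx; sizes, seeds and edge parameters below are assumptions, and unreachable nodes (possible in sparse ER graphs) are penalized with the maximum distance.

```python
import random
import networkx as nx

random.seed(5)
n = 500
graphs = {"ER": nx.erdos_renyi_graph(n, 0.02, seed=5),
          "BA": nx.barabasi_albert_graph(n, 5, seed=5)}

for name, g in graphs.items():
    s1, s2 = random.sample(list(g.nodes), 2)   # two randomly placed servers
    d1 = nx.single_source_shortest_path_length(g, s1)
    d2 = nx.single_source_shortest_path_length(g, s2)
    cost = sum(min(d1.get(v, n), d2.get(v, n)) for v in g) / n
    share1 = sum(d1.get(v, n) <= d2.get(v, n) for v in g) / n
    print(f"{name}: mean user-server distance {cost:.2f}, "
          f"task split {share1:.2f}/{1 - share1:.2f}")
```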
Q-P Wave traveltime computation by an iterative approach
Ma, Xuxin
2013-01-01
In this work, we present a new approach to computing anisotropic traveltimes by successively solving elliptically isotropic traveltime problems. The method shows good accuracy and is very simple to implement.
The fundamentals of computational intelligence system approach
Zgurovsky, Mikhail Z
2017-01-01
This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to novel and important CI technologies: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNN, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. Applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty and a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and the results of their application for bankruptcy ris...
A polyhedral approach to computing border bases
Braun, Gábor
2009-01-01
Border bases can be considered to be the natural extension of Gröbner bases that have several advantages. Unfortunately, to date the classical border basis algorithm relies on (degree-compatible) term orderings and implicitly on reduced Gröbner bases. We adapt the classical border basis algorithm to allow for calculating border bases for arbitrary degree-compatible order ideals, which is independent of term orderings. Moreover, the algorithm also supports calculating degree-compatible order ideals with preference on contained elements, even though finding a preferred order ideal is NP-hard. Effectively we retain degree-compatibility only to successively extend our computation degree-by-degree. The adaptation is based on our polyhedral characterization: order ideals that support a border basis correspond one-to-one to integral points of the order ideal polytope. This establishes a crucial connection between the ideal and the combinatorial structure of the associated factor spaces.
Biologically motivated computationally intensive approaches to image pattern recognition
Petkov, Nikolay
1995-01-01
This paper presents some of the research activities of the research group on vision as a grand challenge problem, whose solution is estimated to need the power of Tflop/s computers and for which computational methods have yet to be developed. The approaches concerned are biologically motivated, in th
An Approach to Dynamic Provisioning of Social and Computational Services
Bonino da Silva Santos, Luiz Olavo; Sorathia, Vikram; Ferreira Pires, Luis; Sinderen, van Marten
2010-01-01
Service-Oriented Computing (SOC) builds upon the intuitive notion of service already known and used in our society for a long time. SOC-related approaches are based on computer-executable functional units that often represent automation of services that exist at the social level, i.e., services at t
General approaches in ensemble quantum computing
Indian Academy of Sciences (India)
V Vimalan; N Chandrakumar
2008-01-01
We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating-frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate our methodology experimentally for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration taken to a third compared to the standard refocused INEPT-type sequence. We introduce the 1D version for readout of the rotating-frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.
Delay Computation Using Fuzzy Logic Approach
Directory of Open Access Journals (Sweden)
Ramasesh G. R.
2012-10-01
Full Text Available The paper presents a practical application of fuzzy sets and system theory in predicting, with reasonable accuracy, delays arising from a wide range of factors pertaining to construction projects. In this paper we use fuzzy logic to predict delays on account of delayed supplies and labor shortage. It is observed that project scheduling software uses either a deterministic method or a probabilistic method for the computation of schedule durations, delays, lags and other parameters. In other words, these methods use only quantitative inputs, leaving out the qualitative aspects associated with individual activities of work. A qualitative aspect, viz. the expertise of the mason or the lack of experience, can have a significant impact on the assessed duration. Such qualitative aspects do not find adequate representation in project scheduling software. A realistic project is considered, for which a PERT chart has been prepared showing all the major activities in reasonable detail. This project has been periodically updated until its completion. It is observed that some of the activities are delayed due to extraneous factors, resulting in the overall delay of the project. The software has the capability to calculate the overall delay through CPM (Critical Path Method) when each of the activity delays is reported. We shall now demonstrate that, by using fuzzy logic, these delays could have been predicted well in advance.
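A minimal sketch of the fuzzy reasoning step (membership functions, rule outputs and the input value are illustrative assumptions, not the paper's calibrated model): a qualitative labor-shortage level is mapped to a predicted activity delay by weighted defuzzification.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

shortage = 0.55   # normalized labor-shortage severity, assumed input

# Degrees of membership in three linguistic levels.
low = tri(shortage, -0.5, 0.0, 0.5)
med = tri(shortage, 0.0, 0.5, 1.0)
high = tri(shortage, 0.5, 1.0, 1.5)

# Rule consequents: delay (days) for each level; Sugeno-style weighted average.
delays = np.array([0.0, 5.0, 12.0])
weights = np.array([low, med, high])
print("predicted delay (days):", weights @ delays / weights.sum())
```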
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
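The defining idea of gPC (standard notation, not tied to the book's examples) is the expansion of a random solution in polynomials orthogonal with respect to the input distribution:

$$u(x, t; \xi) \approx \sum_{k=0}^{P} \hat{u}_{k}(x, t)\, \Phi_{k}(\xi), \qquad \mathbb{E}\left[ \Phi_{j}(\xi) \Phi_{k}(\xi) \right] = \delta_{jk},$$

where $\xi$ collects the random inputs and the basis $\Phi_{k}$ is matched to their law (Hermite polynomials for Gaussian inputs, Legendre for uniform ones); statistics such as the mean $\hat{u}_{0}$ and the variance $\sum_{k \ge 1} \hat{u}_{k}^{2}$ then follow directly from the coefficients.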
Mutations that Cause Human Disease: A Computational/Experimental Approach
Energy Technology Data Exchange (ETDEWEB)
Beernink, P; Barsky, D; Pesavento, B
2006-01-11
International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which
Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches
Abolfazli, Saeid; Gani, Abdullah
2012-01-01
Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward rich computing experiences. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, namely hardware and software. Generating high-end hardware is a subset of hardware augmentation approaches, whereas conserving local resources and reducing resource requirements are grouped under software augmentation methods. Our study advocates that conserving smartphones' native resources, which is mainly done via task offloading, is more appropriate for already-developed applications than new ones, due to the costly re-development process. Cloud computing has recently obtained momentous ground as one of the major co...
Convergence Analysis of a Class of Computational Intelligence Approaches
Directory of Open Access Journals (Sweden)
Junfeng Chen
2013-01-01
Full Text Available Computational intelligence approaches form a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, it is difficult to analyze their convergence. In this paper, a computational model is built up for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, namely the variation rate and the progress rate, are then defined to indicate the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution-set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.
What is intrinsic motivation? A typology of computational approaches
Directory of Open Access Journals (Sweden)
Pierre-Yves Oudeyer
2009-11-01
Full Text Available Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and are even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.
An Integrated Computer-Aided Approach for Environmental Studies
DEFF Research Database (Denmark)
Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia;
1997-01-01
A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope...... and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated....
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li
2014-08-01
Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
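The degree dependence noted above can be seen in a small SIS-style simulation on a scale-free graph (rates, sizes and seeds are assumptions; this is not the paper's mean-field model):

```python
import random
import networkx as nx

random.seed(7)
g = nx.barabasi_albert_graph(2000, 3)
beta, gamma, steps = 0.08, 0.1, 200        # infection / cure rates, assumed
infected = set(random.sample(list(g.nodes), 20))

for _ in range(steps):
    nxt = set(infected)
    for u in infected:
        for v in g[u]:                     # try to infect neighbors
            if v not in infected and random.random() < beta:
                nxt.add(v)
        if random.random() < gamma:        # cure (back to susceptible)
            nxt.discard(u)
    infected = nxt

deg = dict(g.degree)
mean_inf = sum(deg[u] for u in infected) / max(len(infected), 1)
mean_all = sum(deg.values()) / g.number_of_nodes()
print(f"mean degree: infected {mean_inf:.1f} vs overall {mean_all:.1f}")
```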
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David
2017-03-01
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for the chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela
2012-11-01
We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.
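A minimal sketch of the hybrid idea follows, assuming a saturating dose-response for the infection probability and an explicit finite-difference diffusion step; the grid size, rates, and functional forms are illustrative placeholders, not the calibrated model from the paper.

```python
import numpy as np

# Hedged sketch: virus spreads deterministically by diffusion on a grid,
# while infection of cells is a stochastic event whose probability grows
# with the local virus density. All parameters are illustrative.

rng = np.random.default_rng(7)
n, D, dt = 50, 0.1, 1.0
virus = np.zeros((n, n)); virus[n // 2, n // 2] = 100.0   # initial inoculum
infected = np.zeros((n, n), dtype=bool)

def p_infect(v, v50=5.0):
    return v / (v + v50)            # saturating dose-response (assumed form)

for step in range(200):
    # deterministic diffusion: explicit 5-point Laplacian, periodic edges
    lap = (np.roll(virus, 1, 0) + np.roll(virus, -1, 0)
           + np.roll(virus, 1, 1) + np.roll(virus, -1, 1) - 4 * virus)
    virus += D * dt * lap
    # stochastic infection of susceptible cells
    new = (~infected) & (rng.random((n, n)) < p_infect(virus) * 0.05)
    infected |= new
    virus += 0.5 * infected         # infected cells produce new virus

print("infected cells after 200 steps:", int(infected.sum()))
```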
Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M
2012-07-01
We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking.
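The first two steps can be illustrated compactly: the sketch below maps 3D points (standing in for encoded ligand conformations) to octant identifiers by recursively halving a bounding cube and reports the densest octant; the depth, bounding box, and toy points are assumptions.

```python
from collections import Counter

# Hedged sketch of the octree idea described above: each 3D point receives
# an octant identifier by recursive halving of a bounding cube; the most
# populated octant at the chosen depth is taken as the densest cluster.

def octant_id(point, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0), depth=3):
    """Return an identifier like '6-3-0' for `point` at `depth` levels."""
    digits = []
    lo, hi = list(lo), list(hi)
    for _ in range(depth):
        digit = 0
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if point[axis] >= mid:
                digit |= 1 << axis
                lo[axis] = mid
            else:
                hi[axis] = mid
        digits.append(str(digit))
    return "-".join(digits)

# Toy "encoded conformations" in the unit cube (placeholders, not docking data)
points = [(0.12, 0.80, 0.33), (0.15, 0.82, 0.30), (0.14, 0.79, 0.31), (0.90, 0.10, 0.60)]
densest, count = Counter(octant_id(p) for p in points).most_common(1)[0]
print(f"densest octant {densest} holds {count} conformations")
```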
Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools
DEFF Research Database (Denmark)
Caspersen, Michael E.; Nowack, Palle
2014-01-01
Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects....
An approach to computing direction relations between separated object groups
Yan, H.; Wang, Z.; Li, J.
2013-09-01
Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, theoretically based on gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and that the results are correct from the point of view of spatial cognition.
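A hedged 2-D sketch of the core computation follows, using scipy's Voronoi diagram in place of the paper's triangulation-based construction: ridges separating the two groups vote on a quantitative direction, which is then mapped to a qualitative label. The toy coordinates and the eight-sector labelling are assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hedged 2-D sketch: build the Voronoi diagram over both groups, keep the
# ridges separating points of group A from points of group B, and average
# the A->B vectors across those ridges to get a quantitative direction.

A = np.array([[0.0, 0.0], [1.0, 0.2], [0.5, -0.3]])   # object group A (toy data)
B = np.array([[4.0, 3.0], [4.5, 3.4], [5.0, 2.8]])    # object group B (toy data)
pts = np.vstack([A, B])
is_b = np.arange(len(pts)) >= len(A)

vor = Voronoi(pts)
vecs = []
for i, j in vor.ridge_points:          # generator pairs sharing a Voronoi edge
    if is_b[i] != is_b[j]:             # edge lies between the two groups
        a, b = (pts[i], pts[j]) if is_b[j] else (pts[j], pts[i])
        vecs.append(b - a)             # direction from group A towards group B

mean = np.mean(vecs, axis=0)
angle = np.degrees(np.arctan2(mean[1], mean[0])) % 360
labels = ["east", "northeast", "north", "northwest",
          "west", "southwest", "south", "southeast"]
print("quantitative direction: %.1f deg -> %s"
      % (angle, labels[int((angle + 22.5) // 45) % 8]))
```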
Computational experiment approach to advanced secondary mathematics curriculum
Abramovich, Sergei
2014-01-01
This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...
The process group approach to reliable distributed computing
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Computational biomechanics for medicine new approaches and new applications
Miller, Karol; Wittek, Adam; Nielsen, Poul
2015-01-01
The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.
A distributed computing approach to mission operations support. [for spacecraft
Larsen, R. L.
1975-01-01
Mission operations support computing includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Directory of Open Access Journals (Sweden)
Zhiheng Wang
Full Text Available The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering our understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at either terminus of an ordered region connecting a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is classified as ordered or disordered according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established publicly available disordered region predictors. They also indicate that our approach is more accurate when a query has higher homology with the knowledge database. DisoMCS is available at http://cal.tongji.edu.cn/disorder/.
Oyedepo, Gbenga A; Wilson, Angela K
2010-08-26
The correlation consistent Composite Approach, ccCA [ Deyonker , N. J. ; Cundari , T. R. ; Wilson , A. K. J. Chem. Phys. 2006 , 124 , 114104 ] has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost, as compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and enthalpies of formation for the ground states for a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation, in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for application to study novel chemical systems with significant nondynamical correlation effects.
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species, of the nature of the molecular site, and of the type of nucleus, but
Directory of Open Access Journals (Sweden)
Corrado Lodovico Galli
Full Text Available Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated to be useful as a first step of chemical safety evaluation, since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity of the AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over- or under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than a fully wet strategy.
A Unified Computational Approach to Oxide Aging Processes
Energy Technology Data Exchange (ETDEWEB)
Bowman, D.J.; Fleetwood, D.M.; Hjalmarson, H.P.; Schultz, P.A.
1999-01-27
In this paper we describe a unified, hierarchical computational approach to aging and reliability problems caused by materials changes in the oxide layers of Si-based microelectronic devices. We apply this method to a particular low-dose-rate radiation effects problem.
A Computationally Based Approach to Homogenizing Advanced Alloys
Energy Technology Data Exchange (ETDEWEB)
Jablonski, P D; Cowen, C J
2011-02-27
We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied to Ni-based superalloys as well as the (computationally) more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and its subsequent verification in real castings are presented.
Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach
Directory of Open Access Journals (Sweden)
Grover Kearns
2010-06-01
Full Text Available Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants in the use of forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using perceptions of the students about the success of the methodology and their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.
Efficient Approach for Load Balancing in Virtual Cloud Computing Environment
Directory of Open Access Journals (Sweden)
Harvinder singh
2014-10-01
Full Text Available Cloud computing technology is changing the focus of the IT world and is becoming popular because of its great characteristics. Load balancing is one of the main challenges in cloud computing for distributing workloads across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources. Successful load balancing optimizes resource use, maximizes throughput, minimizes response time, and avoids overload. The objective of this paper is to propose an approach for scheduling algorithms that can maintain load balancing and provide improved strategies through efficient job scheduling and modified resource allocation techniques. The results discussed in this paper are based on the existing round robin, least connection, throttled load balance, and fastest response time scheduling algorithms, and a newly proposed fastest-with-least-connection scheduling algorithm. With this new algorithm, the overall response time and data centre processing time are improved and cost is reduced in comparison to the existing scheduling approaches.
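The paper names a fastest-with-least-connection policy without giving a listing here, so the sketch below is one plausible reading: among the servers with the fewest active connections, choose the one with the lowest recent response time. The Server fields and all numbers are hypothetical.

```python
# Hedged sketch of a "fastest with least connection" dispatch policy
# (one plausible interpretation, not the paper's implementation).

class Server:
    def __init__(self, name, response_ms):
        self.name = name
        self.response_ms = response_ms   # rolling average in a real system
        self.active = 0                  # current connection count

def pick(servers):
    # Tier 1: least-connection filter; Tier 2: fastest response time.
    least = min(s.active for s in servers)
    candidates = [s for s in servers if s.active == least]
    return min(candidates, key=lambda s: s.response_ms)

servers = [Server("vm-a", 120.0), Server("vm-b", 95.0), Server("vm-c", 95.0)]
for req in range(5):
    s = pick(servers)
    s.active += 1                        # dispatch; decremented on completion
    print(f"request {req} -> {s.name} (active={s.active}, rt={s.response_ms}ms)")
```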
A GPU-Computing Approach to Solar Stokes Profile Inversion
Harker, Brian J
2012-01-01
We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disc maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel genetic algorithm with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disc vector ma...
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids, and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
Computational intelligence approaches for pattern discovery in biological systems.
Fogel, Gary B
2008-07-01
Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.
Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.
2009-12-01
FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature and a description provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems.
Program summary
Program title: FILMPAR
Catalogue identifier: AEEL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 530 421
No. of bytes in distributed program, including test data, etc.: 1 960 313
Distribution format: tar.gz
Programming language: C++ and MPI
Computer: Desktop, server
Operating system: Unix/Linux, Mac OS X
Has the code been vectorised or parallelised?: Yes. Tested with up to 128 processors
RAM: 512 MBytes
Classification: 12
External routines: GNU C/C++, MPI
Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering
Cloud computing approaches to accelerate drug discovery value chain.
Garg, Vibhav; Arora, Suchir; Gupta, Chitra
2011-12-01
Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn poses challenges to computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Furthermore, the integration of Cloud computing with parallel computing is expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be well suited to managing drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.
An engineering based approach for hydraulic computations in river flows
Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.
2016-06-01
This paper presents an engineering-based approach to hydraulic risk evaluation. The aim of the research is to identify a criterion for choosing the simplest appropriate model to use in different scenarios as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities and accelerations, can be described through a three-dimensional approach that considers all the flow properties varying in all directions. In many practical applications for river flow studies, however, the greatest changes occur only in two dimensions or even only in one. In these cases the use of simplified approaches can lead to accurate results, with easy-to-build and faster simulations. The study has been conducted taking into account a dimensionless channel parameter, the ratio of the curvature radius to the width of the channel (R/B).
Neuromolecular computing: a new approach to human brain evolution.
Wallace, R; Price, H
1999-09-01
Evolutionary approaches in human cognitive neurobiology traditionally emphasize macroscopic structures. It may soon be possible to supplement these studies with models of human information-processing at the molecular level. Thin-film, simulation, fluorescence microscopy, and high-resolution X-ray crystallographic studies provide evidence for transiently organized neural membrane molecular systems with possible computational properties. This review article examines evidence for hydrophobic-mismatch molecular interactions within phospholipid microdomains of a neural membrane bilayer. It is proposed that these interactions are a massively parallel algorithm which can rapidly compute near-optimal solutions to complex cognitive and physiological problems. Coupling of microdomain activity to permeant ion movements at ligand-gated and voltage-gated channels permits the conversion of molecular computations into neuron frequency codes. Evidence for microdomain transport of proteins to specific locations within the bilayer suggests that neuromolecular computation may be under some genetic control and thus modifiable by natural selection. A possible experimental approach for examining evolutionary changes in neuromolecular computation is briefly discussed.
Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing
2016-10-01
We have developed a new numerical ray-tracing approach to computing the LIDAR signal power function, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We discuss in particular the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully validated by comparison with real measurements.
A computational approach to chemical etiologies of diabetes
DEFF Research Database (Denmark)
Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe
2013-01-01
Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic...... linkages in type 2 diabetes (T2D) through genome-wide associations, disease similarities, and published empirical evidence. Ten environmental chemicals were found to be potentially linked to T2D; the highest scores were observed for arsenic, 2,3,7,8-tetrachlorodibenzo-p-dioxin, hexachlorobenzene...
An evolutionary computational approach for the dynamic Stackelberg competition problems
Directory of Open Access Journals (Sweden)
Lorena Arboleda-Castro
2016-06-01
Full Text Available Stackelberg competition models are an important family of economic decision problems from game theory, in which the main goal is to find optimal strategies between two competitors taking into account their hierarchical relationship. Although these models have been widely studied in the past, very few works deal with uncertainty scenarios, especially those that vary over time. In this regard, the present research studies this topic and proposes a computational method for efficiently solving dynamic Stackelberg competition models. The computational experiments suggest that the proposed approach is effective for problems of this nature.
One approach for evaluating the Distributed Computing Design System (DCDS)
Ellis, J. T.
1985-01-01
The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.
Bronson, Richard
1986-01-01
Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)
A computational thermodynamics approach to the Gibbs-Thomson effect
Energy Technology Data Exchange (ETDEWEB)
Shahandeh, Sina [Department of Material Science and Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)]. E-mail: sinashahandeh@yahoo.com; Nategh, Said [Department of Material Science and Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)
2007-01-15
In a two-phase system, the curvature of the interface increases the solute concentration in the matrix. This effect plays a significant role in solidification, precipitation, nucleation and growth, and coarsening. There are a number of models and formulas for the Gibbs-Thomson effect in binary alloys. In this paper, with the help of CALPHAD calculations, a new approach for describing this effect in binary and multicomponent systems is proposed. In this generalized method, none of the traditional simplifying assumptions are made, which yields more accurate results for the Gibbs-Thomson phenomenon. This model is compared with previous formulas for several alloy systems.
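For orientation, the classical relation that such generalized treatments reduce to in the dilute binary limit is the familiar Gibbs-Thomson equation (standard textbook form, not the paper's multicomponent result):

```latex
% Classical Gibbs-Thomson relation for a dilute binary alloy with an
% isotropic interface energy:
%   c_r    : matrix solute concentration at a curved interface of radius r
%   c_inf  : equilibrium concentration at a flat interface
%   gamma  : interfacial energy,  V_m : molar volume of the precipitate
\[
  c_r \;=\; c_\infty \exp\!\left(\frac{2\,\gamma\,V_m}{r\,R\,T}\right)
  \;\approx\; c_\infty\left(1 + \frac{2\,\gamma\,V_m}{r\,R\,T}\right)
\]
```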
New Approaches to Practical Secure Two-Party Computation
DEFF Research Database (Denmark)
Nordholt, Peter Sebastian
We present two new approaches to maliciously secure two-party computation with practical efficiency: • First, we present the first maliciously secure two-party computation protocol with practical efficiency based on the classic semi-honest protocol given by Goldreich et al. at STOC 1987. Before now...... yielding a protocol of high practical efficiency. • As a by-product of these two new protocols for secure two-party computation we develop two new cryptographic tools of independent interest: for the first protocol we give a highly practical OT-extension protocol that, apart from a few OTs to bootstrap...... all practical protocols with malicious security were based on Yao's garbled circuits. We report on an implementation of this protocol demonstrating its high efficiency. For larger circuits it evaluates 20000 Boolean gates per second. As an example, evaluating one oblivious AES encryption (around 34000...
Huré, J -M
2016-01-01
We compute the structure of a self-gravitating torus with a polytropic equation-of-state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at $2$nd-order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index $n \ge 1$), the underlying $2$nd-order is naturally recovered for Boundary Values (BVs) and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, Virial Parameter, etc.), i.e. errors vary with the number $N$ of nodes per direction as $\sim 1/N^2$. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS ($n < 1$), due to ...
Review: the physiological and computational approaches for atherosclerosis treatment.
Wang, Wuchen; Lee, Yugyung; Lee, Chi H
2013-09-01
Cardiovascular disease has long caused severe loss of life, especially through conditions associated with arterial malfunction attributable to atherosclerosis and subsequent thrombotic formation. This article reviews the physiological mechanisms that underlie the transition from plaque formation in the atherosclerotic process to platelet aggregation and eventually thrombosis. The physiological and computational approaches, such as percutaneous coronary intervention and stent design modeling, to detect, evaluate and mitigate this malicious progression are also discussed.
Computer Mechatronics: A Radical Approach to Mechatronics Education
Nilsson, Martin
2005-01-01
This paper describes some distinguishing features of a course on mechatronics based on computer science. We propose a teaching approach called Controlled Problem-Based Learning (CPBL). We have applied this method to three generations (2003-2005) of mainly fourth-year undergraduate students at Lund University (LTH). Although students found the course difficult, there were no dropouts, and all students attended the examination in 2005.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can...... be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate...... the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is a factor of about 10^6 shorter than that of the original highly
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
Archiving Software Systems: Approaches to Preserve Computational Capabilities
King, T. A.
2014-12-01
A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability; on an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future-forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
Gunaydin, Hakan; Acevedo, Orlando; Jorgensen, William L; Houk, K N
2007-05-01
The energetics of methyl-transfer reactions from dimethylammonium, tetramethylammonium, and trimethylsulfonium to dimethylamine were computed with density functional theory, MP2, CBS-QB3, and quantum mechanics/molecular mechanics (QM/MM) Monte Carlo methods. At the CBS-QB3 level, the gas-phase activation enthalpies are computed to be 9.9, 15.3, and 7.9 kcal/mol, respectively. MP2/6-31+G(d,p) activation enthalpies are in best agreement with the CBS-QB3 results. The effects of aqueous solvation on these reactions were studied with polarizable continuum model, generalized Born/surface area (GB/SA), and QM/MM Monte Carlo simulations utilizing free-energy perturbation theory in which the PDDG/PM3 semiempirical Hamiltonian for the QM and explicit TIP4P water molecules in the MM region were used. In the aqueous phase, all of these reactions proceed more slowly when compared to the gas phase, since the charged reactants are stabilized more than the transition structure geometries with delocalized positive charges. In order to obtain the aqueous-phase activation free energies, the gas-phase activation free energies were corrected with the solvation free energies obtained from single-point conductor-like polarizable continuum model and GB/SA calculations for the stationary points along the reaction coordinate.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
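A minimal sketch of the sampling step follows, assuming a plain Metropolis kernel rather than the DRAM variant, a 1-D damage-size parameter, and a quadratic surrogate standing in for the finite element model; all functional forms and noise levels are illustrative.

```python
import numpy as np

# Minimal Metropolis sampler (plain Metropolis, not DRAM) for a single
# "damage size" parameter, with a cheap surrogate standing in for the
# finite element / sparse-grid model. All numbers are illustrative.

rng = np.random.default_rng(0)

def surrogate_strain(size):
    return 2.0 * size + 0.5 * size ** 2          # stand-in surrogate model

true_size = 1.3
data = surrogate_strain(true_size) + rng.normal(0, 0.05, size=50)  # noisy sensors

def log_post(size):
    if not 0.0 < size < 5.0:                     # uniform prior on (0, 5)
        return -np.inf
    resid = data - surrogate_strain(size)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

samples, x = [], 2.5
lp = log_post(x)
for _ in range(20000):
    prop = x + rng.normal(0, 0.05)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        x, lp = prop, lp_prop
    samples.append(x)

post = np.array(samples[5000:])                  # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (true {true_size})")
```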
Harb, Moussab
2015-08-26
Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.
A Computer Vision Approach to Identify Einstein Rings and Arcs
Lee, Chien-Hsiu
2017-03-01
Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspection of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identifying ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
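The circle Hough transform is readily available in OpenCV; the hedged sketch below applies it to a single pre-selected candidate cutout. The filename is hypothetical and all thresholds are assumptions to be tuned per survey.

```python
import cv2
import numpy as np

# Hedged sketch of the second tier above using OpenCV's circle Hough
# transform; pre-selection of candidate lenses is assumed to have
# happened upstream, so the input here is a single candidate cutout.

img = cv2.imread("candidate_cutout.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is None:
    raise SystemExit("cutout image not found")
img = cv2.medianBlur(img, 5)             # suppress noise before the transform

circles = cv2.HoughCircles(
    img,
    cv2.HOUGH_GRADIENT,
    dp=1,           # accumulator resolution (same as the image)
    minDist=20,     # minimum distance between detected centres
    param1=100,     # Canny high threshold
    param2=30,      # accumulator threshold: lower -> more (spurious) circles
    minRadius=5,
    maxRadius=50,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"ring/arc candidate at ({x}, {y}) with radius {r} px")
else:
    print("no circular pattern found")
```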
Energy Technology Data Exchange (ETDEWEB)
Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-05
KIVA-hpFE is high-performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing the MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and faster than our previous generation of parallel engine modeling software by many factors. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, this LES method does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, porous media
Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches
Directory of Open Access Journals (Sweden)
Perrin H. Beatty
2016-10-01
Full Text Available A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.
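Flux balance analysis itself reduces to a linear program; the toy sketch below maximizes a "biomass" flux subject to steady-state mass balance on a three-reaction placeholder network (not a plant nitrogen model).

```python
import numpy as np
from scipy.optimize import linprog

# Hedged toy flux balance analysis: maximize the flux through a "biomass"
# reaction subject to steady state S v = 0 and capacity bounds. The
# three-reaction network below is a placeholder, not a plant N model.

# Reactions: v0 uptake -> A, v1 A -> B, v2 B -> biomass
S = np.array([
    [1, -1,  0],    # metabolite A balance
    [0,  1, -1],    # metabolite B balance
])
c = np.array([0, 0, -1.0])          # linprog minimizes, so negate biomass flux
bounds = [(0, 10), (0, 8), (0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)
```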
SPINET: A Parallel Computing Approach to Spine Simulations
Directory of Open Access Journals (Sweden)
Peter G. Kropf
1996-01-01
Full Text Available Research in scientific programming enables us to realize more and more complex applications, while, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches are becoming more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed in the Lagrange formalism using Maple.
Bedogni, Alberto; Fedele, Stefano; Bedogni, Giorgio; Scoletta, Matteo; Favia, Gianfranco; Colella, Giuseppe; Agrillo, Alessandro; Bettini, Giordana; Di Fede, Olga; Oteri, Giacomo; Fusco, Vittorio; Gabriele, Mario; Ottolenghi, Livia; Valsecchi, Stefano; Porter, Stephen; Petruzzi, Massimo; Arduino, Paolo; D'Amato, Salvatore; Ungari, Claudio; Fung Polly, Pok-Lam; Saia, Giorgia; Campisi, Giuseppina
2014-09-01
Management of osteonecrosis of the jaw associated with antiresorptive agents is challenging, and outcomes are unpredictable. The severity of disease is the main guide to management, and can help to predict prognosis. Most available staging systems for osteonecrosis, including the widely-used American Association of Oral and Maxillofacial Surgeons (AAOMS) system, classify severity on the basis of clinical and radiographic findings. However, clinical inspection and radiography are limited in their ability to identify the extent of necrotic bone disease compared with computed tomography (CT). We have organised a large multicentre retrospective study (known as MISSION) to investigate the agreement between the AAOMS staging system and the extent of osteonecrosis of the jaw (focal compared with diffuse involvement of bone) as detected on CT. We studied 799 patients with detailed clinical phenotyping who had CT images taken. Features of diffuse bone disease were identified on CT within all AAOMS stages (20%, 8%, 48%, and 24% of patients in stages 0, 1, 2, and 3, respectively). Of the patients classified as stage 0, 110/192 (57%) had diffuse disease on CT, and about 1 in 3 with CT evidence of diffuse bone disease was misclassified by the AAOMS system as having stages 0 and 1 osteonecrosis. In addition, more than a third of patients with AAOMS stage 2 (142/405, 35%) had focal bone disease on CT. We conclude that the AAOMS staging system does not correctly identify the extent of bony disease in patients with osteonecrosis of the jaw.
Dybeck, Eric C; Schieber, Natalie P; Shirts, Michael R
2016-08-09
We examine the free energies of three benzene polymorphs as a function of temperature in the point-charge OPLS-AA and GROMOS54A7 potentials as well as the polarizable AMOEBA09 potential. For this system, using a polarizable Hamiltonian instead of the cheaper point-charge potentials is shown to have a significantly smaller effect on the stability at 250 K than on the lattice energy at 0 K. The benzene I polymorph is found to be the most stable crystal structure in all three potentials examined and at all temperatures examined. For each potential, we report the free energies over a range of temperatures and discuss the added value of using full free energy methods over the minimized lattice energy to determine the relative crystal stability at finite temperatures. The free energies in the polarizable Hamiltonian are efficiently calculated using samples collected in a cheaper point-charge potential. The polarizable free energies are estimated from the point-charge trajectories using Boltzmann reweighting with MBAR. The high configuration-space overlap necessary for efficient Boltzmann reweighting is achieved by designing point-charge potentials with intramolecular parameters matching those in the expensive polarizable Hamiltonian. Finally, we compare the computational cost of this indirect reweighted free energy estimate to the cost of simulating directly in the expensive polarizable Hamiltonian.
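A minimal sketch of the reweighting step described above, reduced to single-state exponential (Zwanzig) reweighting rather than the full MBAR estimator used by the authors; the energies and observable below are synthetic stand-ins:

    import numpy as np

    def reweighted_average(obs, u_cheap, u_expensive, kT):
        # Estimate <obs> in the expensive (polarizable) ensemble from samples
        # drawn in the cheap (point-charge) ensemble. u_* are potential energies
        # of the same configurations evaluated in each Hamiltonian.
        du = (u_expensive - u_cheap) / kT
        w = np.exp(-(du - du.min()))     # shift exponent for numerical stability
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)       # effective sample size: overlap diagnostic
        return np.sum(w * obs), ess

    rng = np.random.default_rng(0)
    u_cheap = rng.normal(0.0, 1.0, 5000)
    u_expensive = u_cheap + rng.normal(0.0, 0.2, 5000)   # high overlap by design
    mean, ess = reweighted_average(u_expensive, u_cheap, u_expensive, kT=2.5)
    print(mean, ess)

The effective sample size makes the paper's point concrete: matching intramolecular parameters keeps the energy differences small, so the weights stay flat and few samples are wasted.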
Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.
2016-01-01
Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean errors in absolute head posture were 3.34°, 3.86°, and 2.81°, with median errors of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing the Cambridge Face Tracker to the Cervical Range of Motion device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low-cost, easy-to-set-up consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future. PMID:27730008
Computational approaches for rational design of proteins with novel functionalities.
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.
Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches
Directory of Open Access Journals (Sweden)
Dongsheng Che
2014-01-01
Full Text Available High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria; these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed, so that researchers may find them useful for the study of bacterial evolution and pathogenicity mechanisms.
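One of the simplest detectable properties mentioned above is an anomalous genomic signature such as G+C content; a minimal sliding-window sketch (the window size and threshold are illustrative, not taken from any particular PAI tool):

    import statistics

    def gc_content(seq):
        seq = seq.upper()
        return (seq.count('G') + seq.count('C')) / max(len(seq), 1)

    def flag_atypical_windows(genome, window=5000, step=1000, z_cut=2.0):
        # Flag windows whose GC content deviates strongly from the genome-wide
        # distribution, a crude indicator of horizontally transferred regions.
        vals = [(i, gc_content(genome[i:i + window]))
                for i in range(0, len(genome) - window + 1, step)]
        mean = statistics.mean(v for _, v in vals)
        sd = statistics.stdev(v for _, v in vals) or 1e-9
        return [(i, v) for i, v in vals if abs(v - mean) / sd > z_cut]

Real PAI predictors combine several such signals (codon usage, mobility genes, tRNA-adjacent integration sites) rather than GC content alone.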
Benchmarking of computer codes and approaches for modeling exposure scenarios
Energy Technology Data Exchange (ETDEWEB)
Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
Computational approaches for rational design of proteins with novel functionalities
Directory of Open Access Journals (Sweden)
Manish Kumar Tiwari
2012-09-01
Full Text Available Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.
[Computer work and De Quervain's tenosynovitis: an evidence-based approach].
Gigante, M R; Martinotti, I; Cirla, P E
2012-01-01
The debate around the role of work at the personal computer as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of VDUs, we must distinguish those of the upper limbs, and among them those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and clinically it is quite difficult to prove a professional etiology, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time spent using the personal computer or keyboard; limited evidence of correlation is found with time using a mouse. No data are available regarding use exclusively or predominantly of laptops or mobile smartphones.
Institute of Scientific and Technical Information of China (English)
Anjuli A Shah; Nisha I Sainani; Avinash Kambadakone Ramesh; Zarine K Shah; Vikram Deshpande; Peter F Hahn; Dushyant V Sahani
2009-01-01
AIM: To identify multi-detector computed tomography (MDCT) features most predictive of serous cystadenomas (SCAs), correlating with histopathology, and to study the impact of cyst size and MDCT technique on reader performance. METHODS: The MDCT scans of 164 patients with surgically verified pancreatic cystic lesions were reviewed by two readers to study the predictive value of various morphological features for establishing a diagnosis of SCAs. Accuracy in lesion characterization and reader confidence were correlated with lesion size (≤3 cm or >3 cm) and scanning protocols (dedicated vs routine). RESULTS: 28/164 cysts (mean size, 39 mm; range, 8-92 mm) were diagnosed as SCA on pathology. The MDCT features predictive of a diagnosis of SCA were microcystic appearance (22/28, 78.6%), surface lobulations (25/28, 89.3%) and central scar (9/28, 32.4%). Stepwise logistic regression analysis showed that only microcystic appearance was significant for CT diagnosis of SCA (P = 0.0001). The sensitivity, specificity and PPV of central scar and of combined microcystic appearance and lobulations were 32.4%/100%/100% and 68%/100%/100%, respectively. Reader confidence was higher for lesions >3 cm (P = 0.02) and for MDCT scans performed using thin collimation (1.25-2.5 mm) compared to routine 5 mm collimation exams (P > 0.05). CONCLUSION: Central scar on MDCT is diagnostic of SCA but is seen in only one third of SCAs. Microcystic morphology is the most significant CT feature in the diagnosis of SCA. A combination of microcystic appearance and surface lobulations offers accuracy comparable to central scar with higher sensitivity.
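For readers unfamiliar with the reported metrics, here is a minimal sketch of how sensitivity, specificity and PPV follow from raw counts (the counts below are hypothetical, chosen only to mimic a feature present in 9 of 28 SCAs and absent in all non-SCAs):

    def diagnostic_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)   # fraction of true SCAs detected
        specificity = tn / (tn + fp)   # fraction of non-SCAs correctly excluded
        ppv = tp / (tp + fp)           # probability a positive call is a true SCA
        return sensitivity, specificity, ppv

    # hypothetical counts: 28 SCAs (9 feature-positive) among 164 lesions
    print(diagnostic_metrics(tp=9, fp=0, tn=136, fn=19))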
Approaches to Computer Modeling of Phosphate Hide-Out.
1984-06-28
Phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties are... The governing equilibria include the ionization of dihydrogen phosphate, H2PO4- <=> H+ + HPO4(2-), with constant K (B-7); the recombination H+ + OH- <=> H2O, with constant 1/Kw (B-8); and their sum, H2PO4- + OH- <=> HPO4(2-) + H2O, with constant K/Kw (B-9). (NRL Memorandum Report 5361)
Computational systems biology approaches to anti-angiogenic cancer therapeutics.
Finley, Stacey D; Chu, Liang-Hui; Popel, Aleksander S
2015-02-01
Angiogenesis is an exquisitely regulated process that is required for physiological processes and is also important in numerous diseases. Tumors utilize angiogenesis to generate the vascular network needed to supply the cancer cells with nutrients and oxygen, and many cancer drugs aim to inhibit tumor angiogenesis. Anti-angiogenic therapy involves inhibiting multiple cell types, molecular targets, and intracellular signaling pathways. Computational tools are useful in guiding treatment strategies, predicting the response to treatment, and identifying new targets of interest. Here, we describe progress that has been made in applying mathematical modeling and bioinformatics approaches to study anti-angiogenic therapeutics in cancer.
A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis
Directory of Open Access Journals (Sweden)
Dilip Swaminathan
2009-01-01
kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.
Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations
Institute of Scientific and Technical Information of China (English)
Junaid Ali Khan; Muhammad Asif Zahoor Raja; Ijaz Mansoor Qureshi
2011-01-01
We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by the pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike the other numerical techniques of comparable accuracy. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected essential gains in execution speed.
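A minimal sketch of the idea (a tiny feed-forward network as a trial solution, trained here by a plain evolutionary search on the residual of y' = -y with y(0) = 1; the authors' hybrid of a genetic algorithm with pattern search is more elaborate):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 20)            # collocation points

    def trial(params, x):
        # y_t(x) = 1 + x * N(x): satisfies y(0) = 1 by construction.
        w, b, v = params[:5], params[5:10], params[10:15]
        hidden = np.tanh(np.outer(x, w) + b)
        return 1.0 + x * (hidden @ v)

    def unsupervised_error(params):
        eps = 1e-4
        y = trial(params, x)
        dy = (trial(params, x + eps) - trial(params, x - eps)) / (2 * eps)
        return np.mean((dy + y) ** 2)        # residual of y' = -y

    # plain (mu + lambda) evolutionary search over the 15 network weights
    pop = rng.normal(0.0, 1.0, (50, 15))
    for _ in range(300):
        scores = np.array([unsupervised_error(p) for p in pop])
        parents = pop[np.argsort(scores)[:10]]
        children = parents.repeat(4, axis=0) + rng.normal(0.0, 0.1, (40, 15))
        pop = np.concatenate([parents, children])
    best = min(pop, key=unsupervised_error)
    print(np.max(np.abs(trial(best, x) - np.exp(-x))))   # error vs exact solution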
Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin
2016-07-01
Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with a 3.4-GHz Intel i7 8-core processor.
A new approach in CHP steam turbines thermodynamic cycles computations
Directory of Open Access Journals (Sweden)
Grković Vojin R.
2012-01-01
Full Text Available This paper presents a new approach to the mathematical modeling of thermodynamic cycles and electric power of utility district-heating and cogeneration steam turbines. The approach is based on the application of dimensionless mass flows, which describe the thermodynamic cycle of a combined heat and power steam turbine. The mass flows are calculated relative to the mass flow to the low-pressure turbine. The procedure introduces the extraction mass flow load parameter νh, which clearly indicates the energy transformation process and the cogeneration turbine design features, as well as the turbine's fitness for the requirements of the electrical energy system. The presented approach allows fast computations, as well as direct calculation of the selected energy efficiency indicators. The approach is exemplified with calculation results for the district-heat power to electric power ratio, as well as the cycle efficiency, versus νh. The influence of νh on the conformity of a combined heat and power turbine to the grid requirements is also analyzed and discussed. [Projekat Ministarstva nauke Republike Srbije, br. 33049: Development of CHP demonstration plant with gasification of biomass]
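A minimal numerical sketch of the dimensionless-flow idea (assuming, for illustration only, that νh is the ratio of the heating-extraction mass flow to the low-pressure-turbine mass flow; the enthalpy values are hypothetical round numbers, not taken from the paper):

    def chp_indicators(nu_h, h_in=3300.0, h_ext=2800.0, h_out=2300.0, h_w=150.0):
        # All mass flows normalized to the low-pressure-turbine flow m_lp = 1.
        m_lp = 1.0
        m_ext = nu_h * m_lp                 # extraction flow to district heating
        m_tot = m_lp + m_ext                # flow expanding to extraction pressure
        power = m_tot * (h_in - h_ext) + m_lp * (h_ext - h_out)  # kJ per kg LP flow
        heat = m_ext * (h_ext - h_w)        # district heat from extracted steam
        q_in = m_tot * (h_in - h_w)         # crude cycle heat input
        return heat / power, (power + heat) / q_in

    for nu_h in (0.0, 0.5, 1.0):
        print(nu_h, chp_indicators(nu_h))   # heat-to-power ratio, utilization

As in the paper, the heat-to-power ratio grows directly with the extraction load parameter, which is what makes νh a convenient single descriptor of the cycle.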
An Approach for Location privacy in Pervasive Computing Environment
Directory of Open Access Journals (Sweden)
Sudheer Kumar Singh
2010-05-01
Full Text Available This paper focuses on location privacy in location-based services. Location privacy is a particular type of information privacy that can be defined as the ability to prevent others from learning one's current or past location. Many systems, such as GPS, implicitly and automatically give away their users' location privacy: once a user sends his or her current location to the application server, the server stores it in its database, and the user cannot delete or modify the location data after it has been sent. Addressing this problem, in this paper we give a theoretical concept for protecting location privacy in a pervasive computing environment. The approach is based on user-anonymity-based location privacy. We first go through a basic user-anonymity-based location privacy approach that uses a trusted proxy. By analyzing this approach, we propose an improvement over it using dummy locations of users and also dummies of the services requested by users from the application server. The improvement reduces the user's overhead in extracting the necessary information from the reply message coming from the application server. In this approach, the user sends a message containing (current location, ID, and requested service) to the trusted proxy, and the trusted proxy generates dummy locations related to the current location and also generates a temporary pseudonym corresponding to the real ID of the user. After analyzing this approach, we found one remaining problem with the requested service; addressing it, we improve our method by using dummies of the requested service generated by the trusted proxy. The trusted proxy generates dummies (false positions) by dummy-location algorithms.
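A minimal sketch of the dummy-location idea (generating k false positions around the real coordinate before the query leaves the trusted proxy; the radius and count are illustrative):

    import math, random

    def make_dummies(lat, lon, k=4, radius_m=500.0):
        # Generate k plausible false positions near the true location so the
        # application server cannot single out the real one.
        dummies = []
        for _ in range(k):
            r = radius_m * math.sqrt(random.random())    # uniform over the disc
            theta = random.uniform(0.0, 2.0 * math.pi)
            dlat = (r * math.cos(theta)) / 111_320.0     # metres to degrees
            dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
            dummies.append((lat + dlat, lon + dlon))
        return dummies

    real = (52.5200, 13.4050)
    query = make_dummies(*real) + [real]
    random.shuffle(query)      # server sees k+1 indistinguishable positions
    print(query)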
Novel computational approaches for the analysis of cosmic magnetic fields
Energy Technology Data Exchange (ETDEWEB)
Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)
2016-07-01
In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields, from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high-performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing software, based on CRPropa, which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.
Computational approaches to understand cardiac electrophysiology and arrhythmias
Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.
2012-01-01
Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409
Computational Approach to Dendritic Spine Taxonomy and Shape Transition Analysis
Bokota, Grzegorz; Magnowska, Marta; Kuśmierczyk, Tomasz; Łukasik, Michał; Roszkowska, Matylda; Plewczynski, Dariusz
2016-01-01
The common approach in morphological analysis of dendritic spines of mammalian neuronal cells is to categorize spines into subpopulations based on whether they are stubby, mushroom, thin, or filopodia shaped. The corresponding cellular models of synaptic plasticity, long-term potentiation, and long-term depression associate the synaptic strength with either spine enlargement or spine shrinkage. Although a variety of automatic spine segmentation and feature extraction methods were developed recently, no approaches allowing for an automatic and unbiased distinction between dendritic spine subpopulations and detailed computational models of spine behavior exist. We propose an automatic and statistically based method for the unsupervised construction of spine shape taxonomy based on arbitrary features. The taxonomy is then utilized in the newly introduced computational model of behavior, which relies on transitions between shapes. Models of different populations are compared using supplied bootstrap-based statistical tests. We compared two populations of spines at two time points. The first population was stimulated with long-term potentiation, and the other in the resting state was used as a control. The comparison of shape transition characteristics allowed us to identify the differences between population behaviors. Although some extreme changes were observed in the stimulated population, statistically significant differences were found only when whole models were compared. The source code of our software is freely available for non-commercial use. Contact: d.plewczynski@cent.uw.edu.pl. PMID:28066226
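A minimal sketch of the transition-based behavior model described above (estimating a shape-transition probability matrix from per-spine class labels at two time points; the labels below are hypothetical):

    import numpy as np

    SHAPES = ["stubby", "mushroom", "thin", "filopodia"]

    def transition_matrix(labels_t0, labels_t1):
        # Row-normalized counts of spines moving from shape i to shape j.
        idx = {s: i for i, s in enumerate(SHAPES)}
        counts = np.zeros((len(SHAPES), len(SHAPES)))
        for a, b in zip(labels_t0, labels_t1):
            counts[idx[a], idx[b]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0        # avoid division by zero
        return counts / row_sums

    t0 = ["thin", "thin", "mushroom", "stubby", "thin", "filopodia"]
    t1 = ["mushroom", "thin", "mushroom", "stubby", "mushroom", "thin"]
    print(transition_matrix(t0, t1))

Comparing such matrices between a stimulated and a control population, with bootstrap resampling of the spines, is the spirit of the statistical tests the authors supply.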
Genetic braid optimization: A heuristic approach to compute quasiparticle braids
McDonald, Ross B.; Katzgraber, Helmut G.
2013-02-01
In topologically protected quantum computation, quantum gates can be carried out by adiabatically braiding two-dimensional quasiparticles, reminiscent of entangled world lines. Bonesteel [Phys. Rev. Lett.10.1103/PhysRevLett.95.140503 95, 140503 (2005)], as well as Leijnse and Flensberg [Phys. Rev. B10.1103/PhysRevB.86.104511 86, 104511 (2012)], recently provided schemes for computing quantum gates from quasiparticle braids. Mathematically, the problem of executing a gate becomes that of finding a product of the generators (matrices) in that set that approximates the gate best, up to an error. To date, efficient methods to compute these gates only strive to optimize for accuracy. We explore the possibility of using a generic approach applicable to a variety of braiding problems based on evolutionary (genetic) algorithms. The method efficiently finds optimal braids while allowing the user to optimize for the relative utilities of accuracy and/or length. Furthermore, when optimizing for error only, the method can quickly produce efficient braids.
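A minimal sketch of the genetic search described above, evolving words over a fixed generator set to approximate a target 2x2 unitary; the generators here are arbitrary SU(2) rotations standing in for actual braid matrices, and optimization is for error only:

    import numpy as np

    rng = np.random.default_rng(2)

    def rot(axis, angle):
        # SU(2) rotation exp(-i*angle/2 * n.sigma), a stand-in braid generator.
        x, y, z = axis
        n_sigma = np.array([[z, x - 1j * y], [x + 1j * y, -z]])
        return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * n_sigma

    GENS = [rot((1, 0, 0), 2 * np.pi / 5), rot((0, 0, 1), 2 * np.pi / 5)]
    TARGET = rot((0, 1, 0), np.pi / 3)

    def error(word):
        m = np.eye(2, dtype=complex)
        for g in word:
            m = GENS[g] @ m
        return np.linalg.norm(m - TARGET)   # a real metric would mod out global phase

    def mutate(word):
        w = word.copy()
        w[rng.integers(len(w))] = rng.integers(len(GENS))
        return w

    pop = [list(rng.integers(0, len(GENS), 12)) for _ in range(60)]
    for _ in range(200):
        pop.sort(key=error)
        pop = pop[:20] + [mutate(pop[rng.integers(20)]) for _ in range(40)]
    pop.sort(key=error)
    print(error(pop[0]), pop[0])

Optimizing jointly for error and braid length, as the paper does, simply means adding a length penalty to the fitness function.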
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
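A minimal sketch of the simulation side of this (Monte Carlo estimation of expected cost and CVaR for one parametric execution schedule in a toy price-impact model; all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def execution_cost(theta, sigma=0.02, impact=0.1, n_paths=20000):
        # Parametric strategy: trade fractions are a softmax of theta, so any
        # real-valued theta encodes a valid schedule summing to one.
        w = np.exp(theta) / np.exp(theta).sum()
        noise = rng.normal(0.0, sigma, (n_paths, len(w)))
        price_moves = np.cumsum(noise, axis=1)
        # quadratic temporary impact plus exposure to adverse price moves
        return (impact * w ** 2).sum() - (price_moves * w).sum(axis=1)

    def cvar(samples, alpha=0.95):
        # Mean of the worst (1 - alpha) fraction of outcomes.
        tail = np.sort(samples)[int(alpha * len(samples)):]
        return tail.mean()

    costs = execution_cost(np.zeros(10))    # uniform schedule as starting point
    print(costs.mean(), cvar(costs))

The static optimization the authors describe then amounts to searching over theta with the CVaR (or a penalized combination) as the objective.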
a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques
Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.
2016-06-01
In this work, we examine the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo-matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and in the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
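A minimal sketch of the (unmodified) census transform used as the stereo-matching descriptor above; the paper's modified variant and the fused optimization schemes are not reproduced here:

    import numpy as np

    def census_transform(img):
        # Encode each pixel as 8 bits of comparisons with its 3x3 neighbors.
        # Matching costs become Hamming distances, which is what makes the
        # descriptor robust to the harsh radiometric differences noted above.
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint8)
        padded = np.pad(img, 1, mode='edge')
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                out = (out << 1) | (neighbor < img).astype(np.uint8)
        return out

    def hamming_cost(c1, c2):
        return np.unpackbits((c1 ^ c2)[..., None], axis=-1).sum(axis=-1)

    img = (np.arange(25).reshape(5, 5) % 7).astype(np.uint8)
    print(census_transform(img))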
Kostoglou, K.; Hadjipapas, A.; Lowet, E.; Roberts, M.; de Weerd, P.; Mitsis, G.D.
2014-01-01
Aims: The relationship between collective population activity (LFP) and spikes underpins network computation, yet it remains poorly understood. Previous studies utilized pre-defined LFP features to predict spiking from simultaneously recorded LFP, and have reported good prediction of spike bursts bu
A Novel Approach of Load Balancing in Cloud Computing using Computational Intelligence
Directory of Open Access Journals (Sweden)
Shabnam Sharma
2016-02-01
Full Text Available Nature-inspired meta-heuristic algorithms have proved to be beneficial for solving real-world combinatorial problems such as minimum spanning tree, knapsack, process planning, load balancing and many more. In this research work, existing meta-heuristic approaches are discussed. Due to its astonishing feature of echolocation, the bat algorithm has drawn major attention in recent years and is applicable to different problems such as vehicle routing optimization, time-tabling in railway optimization, load balancing in cloud computing, etc. Later, the biological behaviour of bats is explored and various areas of further research are discussed. Finally, the main objective of the research paper is to propose an algorithm for one of the most important applications, which is load balancing in a cloud computing environment.
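A minimal sketch of a simplified bat algorithm applied to a toy load-balancing objective (loudness and pulse-rate updates are omitted; the task costs, population size and the continuous encoding of a task-to-VM assignment are all illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    task_cost = rng.uniform(1.0, 5.0, 20)      # 20 tasks with random costs
    N_VMS = 4

    def load_variance(x):
        # Decode a continuous position into a task->VM assignment.
        assign = np.clip(x, 0, N_VMS - 1e-9).astype(int)
        loads = np.bincount(assign, weights=task_cost, minlength=N_VMS)
        return loads.var()                      # balanced loads -> low variance

    n, dim = 30, len(task_cost)
    pos = rng.uniform(0, N_VMS, (n, dim))
    vel = np.zeros((n, dim))
    best = pos[np.argmin([load_variance(p) for p in pos])].copy()
    for _ in range(200):
        freq = rng.uniform(0.0, 2.0, (n, 1))    # echolocation pulse frequency
        vel += (pos - best) * freq
        pos = np.clip(pos + vel, 0, N_VMS)
        for i in range(n):
            if rng.random() < 0.5:              # local random walk near the best bat
                pos[i] = np.clip(best + 0.01 * rng.normal(size=dim), 0, N_VMS)
            if load_variance(pos[i]) < load_variance(best):
                best = pos[i].copy()
    print(load_variance(best))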
Suggested Approaches to the Measurement of Computer Anxiety.
Toris, Carol
Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…
Directory of Open Access Journals (Sweden)
Coen Pramono D
2005-03-01
Full Text Available Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including the surgical technique to be used in relation to the differences in anatomical structures among individuals. Programs that simulate the surgery become increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, or another, more sophisticated method such as three-dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramina were seen to lie relatively high with respect to the sigmoid notches; therefore, ensuring the bone incisions in the sagittal split was presumed to be difficult. A 3D-CT was made and considered to be very helpful in supporting the pre-operative diagnosis.
Computational approaches to substrate-based cell motility
Ziebert, Falko; Aranson, Igor S.
2016-07-01
Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realisation of active, self-propelled 'particles', a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modelling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organised systems such as living cells.
Leaching from Heterogeneous Heck Catalysts: A Computational Approach
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
The possibility of carrying out a purely heterogeneous Heck reaction in practice without Pd leaching has been previously considered by a number of research groups, but no general consensus has yet been reached. Here, the reaction was, for the first time, evaluated by a simple computational approach. Modelling experiments were performed on one of the initial catalytic steps: phenyl halide attachment on the (111)-to-(100) and (111)-to-(111) ridges of a Pd crystal. Three surface structures of the resulting [PhPdX] were identified as possible reactive intermediates. Following potential energy minimisation calculations based on a universal force field, the relative stabilities of these surface species were then determined. Results showed the most stable species to be one in which a Pd ridge atom is removed from the Pd crystal structure, suggesting that Pd leaching induced by phenyl halides is energetically favourable.
Systems approaches to computational modeling of the oral microbiome
Directory of Open Access Journals (Sweden)
Dimiter V. Dimitrov
2013-07-01
Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system-level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information contained in diet - oral microbiome - host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.
Computer Aided Interpretation Approach for Optical Tomographic Images
Klose, Christian D; Netz, Uwe; Beuthan, Juergen; Hielscher, Andreas H
2010-01-01
A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) of human finger joints in optical tomographic images. The image interpretation method employs a multi-variate signal detection analysis aided by a machine learning classification algorithm called Self-Organizing Mapping (SOM). Unlike in previous studies, this allows for combining multiple physical image parameters, such as minimum and maximum values of the absorption coefficient, for identifying affected and not affected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging and inspection of optical tomographic images) were used as "ground truth" benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities while...
Computer Modeling of Violent Intent: A Content Analysis Approach
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.
2014-01-03
We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
Local-basis-function approach to computed tomography
Hanson, K. M.; Wecksung, G. W.
1985-12-01
In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
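A minimal sketch of the cubic B-spline basis function underlying this representation (the standard centered piecewise-cubic form on a uniform grid; the 2D grid arrangement and the tabulated line/strip integrals are not shown):

    import numpy as np

    def cubic_bspline(t):
        # Centered cubic B-spline: positive, supported on |t| < 2, C2-continuous.
        t = np.abs(np.asarray(t, dtype=float))
        out = np.zeros_like(t)
        m1 = t < 1
        m2 = (t >= 1) & (t < 2)
        out[m1] = (4.0 - 6.0 * t[m1] ** 2 + 3.0 * t[m1] ** 3) / 6.0
        out[m2] = (2.0 - t[m2]) ** 3 / 6.0
        return out

    def reconstruct(coeffs, x, spacing=1.0):
        # A 1D reconstruction is a sum of shifted, overlapping basis functions.
        return sum(c * cubic_bspline((x - k * spacing) / spacing)
                   for k, c in enumerate(coeffs))

    print(cubic_bspline([0.0, 0.5, 1.0, 1.5, 2.0]))   # [0.667, 0.479, 0.167, 0.021, 0.0]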
Examples of computational approaches for elliptic, possibly multiscale PDEs with random inputs
Le Bris, Claude; Legoll, Frédéric
2017-01-01
We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations manipulated are linear elliptic, and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical analysis and numerical analysis are briefly mentioned. Numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.
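A minimal sketch of one variance-reduction technique in this spirit: a control variate built from a cheap surrogate with analytically known mean, applied to a scalar output of a model with a random coefficient (the "expensive" model here is deliberately trivial):

    import numpy as np

    rng = np.random.default_rng(5)

    def expensive_output(xi):
        # Stand-in for a PDE quantity of interest with random coefficient xi.
        return np.exp(-xi) + 0.1 * np.sin(5 * xi)

    def surrogate(xi):
        # Cheap approximation; for xi ~ N(0, 1), E[exp(-xi)] = exp(1/2) exactly.
        return np.exp(-xi)

    xi = rng.normal(0.0, 1.0, 10000)
    q, s = expensive_output(xi), surrogate(xi)
    beta = np.cov(q, s)[0, 1] / np.var(s)      # near-optimal control-variate weight
    q_cv = q - beta * (s - np.exp(0.5))        # same mean, smaller variance
    print(q.mean(), q.var(), q_cv.mean(), q_cv.var())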
Cloud computing approaches for prediction of ligand binding poses and pathways.
Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S
2015-01-22
We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design.
Towards scalable quantum communication and computation: Novel approaches and realizations
Jiang, Liang
Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as
Computer-Aided Approaches for Targeting HIVgp41
Directory of Open Access Journals (Sweden)
William J. Allen
2012-08-01
Full Text Available Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41 which advances through four distinct conformational states: (i native, (ii pre-hairpin intermediate, (iii fusion active (fusogenic, and (iv post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that gp41 N-terminal heptad repeat (NHR and C‑terminal heptad repeat (CHR domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA‑approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.
Directory of Open Access Journals (Sweden)
Vineet Kumar
2016-01-01
Full Text Available Iris segmentation in the iris recognition systems is a challenging task under noncooperative environments. The iris segmentation is a process of detecting the pupil, iris’s outer boundary, and eyelids in the iris image. In this paper, we propose a pupil localization method for locating the pupils in the non-close-up and frontal-view iris images that are captured under near-infrared (NIR illuminations and contain the noise, such as specular and lighting reflection spots, eyeglasses, nonuniform illumination, low contrast, and occlusions by the eyelids, eyelashes, and eyebrow hair. In the proposed method, first, a novel edge-map is created from the iris image, which is based on combining the conventional thresholding and edge detection based segmentation techniques, and then, the general circular Hough transform (CHT is used to find the pupil circle parameters in the edge-map. Our main contribution in this research is a novel edge-map creation technique, which reduces the false edges drastically in the edge-map of the iris image and makes the pupil localization in the noisy NIR images more accurate, fast, robust, and simple. The proposed method was tested with three iris databases: CASIA-Iris-Thousand (version 4.0, CASIA-Iris-Lamp (version 3.0, and MMU (version 2.0. The average accuracy of the proposed method is 99.72% and average time cost per image is 0.727 sec.
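A minimal sketch of a thresholding-plus-CHT pupil localization pipeline of the kind described, using OpenCV (the blur size, dark threshold and radius bounds are illustrative, and plain thresholding plus Canny stands in for the paper's custom edge-map construction):

    import cv2
    import numpy as np

    def locate_pupil(gray):
        # Pupils are dark under NIR illumination: threshold, clean up, take edges.
        blurred = cv2.GaussianBlur(gray, (7, 7), 0)
        _, dark = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
        dark = cv2.morphologyEx(dark, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        edges = cv2.Canny(dark, 50, 150)
        # General circular Hough transform over the cleaned edge map.
        circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                   param1=150, param2=15, minRadius=15, maxRadius=80)
        if circles is None:
            return None
        x, y, r = circles[0, 0]
        return int(x), int(y), int(r)        # pupil center and radius in pixels

    img = cv2.imread('iris.png', cv2.IMREAD_GRAYSCALE)   # hypothetical NIR image
    print(locate_pupil(img))

The paper's contribution is precisely the quality of the edge map handed to the CHT: fewer false edges mean fewer spurious accumulator peaks and a faster, more robust fit.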
Human Computer Interaction Approach in Developing Customer Relationship Management
Directory of Open Access Journals (Sweden)
Mohd H.N.M. Nasir
2008-01-01
Full Text Available Problem statement: Many published studies have found that more than 50% of Customer Relationship Management (CRM system implementations have failed due to the failure of system usability and does not fulfilled user expectation. This study presented the issues that contributed to the failures of CRM system and proposed a prototype of CRM system developed using Human Computer Interaction approaches in order to resolve the identified issues. Approach: In order to capture the users' requirements, a single in-depth case study of a multinational company was chosen in this research, in which the background, current conditions and environmental interactions were observed, recorded and analyzed for stages of patterns in relation to internal and external influences. Some techniques of blended data gathering which are interviews, naturalistic observation and studying user documentation were employed and then the prototype of CRM system was developed which incorporated User-Centered Design (UCD approach, Hierarchical Task Analysis (HTA, metaphor and identification of users' behaviors and characteristics. The implementation of these techniques, were then measured in terms of usability. Results: Based on the usability testing conducted, the results showed that most of the users agreed that the system is comfortable to work with by taking the quality attributes of learnability, memorizeablity, utility, sortability, font, visualization, user metaphor, information easy view and color as measurement parameters. Conclusions/Recommendations: By combining all these techniques, a comfort level for the users that leads to user satisfaction and higher usability degree can be achieved in a proposed CRM system. Thus, it is important that the companies should put usability quality attribute into a consideration before developing or procuring CRM system to ensure the implementation successfulness of the CRM system.
An evolutionary computation approach to examine functional brain plasticity
Directory of Open Access Journals (Sweden)
Arnab eRoy
2016-04-01
Full Text Available One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, the functional relationship between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions seperated by finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN and the executive control network (ECN during recovery from traumatic brain injury (TBI; the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in
Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef
2015-12-01
Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
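A minimal sketch of the Gaussian-mixture matching step (2D points, equal-weight isotropic Gaussians; under a rigid transform, minimizing the L2 distance between the mixtures reduces to maximizing their cross-term, which has the closed form below):

    import numpy as np
    from scipy.optimize import minimize

    def cross_term(X, Y, sigma2):
        # Integral of the product of two isotropic GMMs, up to a constant:
        # sum over point pairs of N(x_i - y_j | 0, 2*sigma2*I).
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (4.0 * sigma2)).sum()

    def rigid(params, X):
        theta, tx, ty = params
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return X @ R.T + np.array([tx, ty])

    def register(source, target, sigma2=0.5):
        obj = lambda p: -cross_term(rigid(p, source), target, sigma2)
        return minimize(obj, x0=np.zeros(3), method='Nelder-Mead').x

    rng = np.random.default_rng(6)
    source = rng.uniform(0.0, 5.0, (40, 2))
    target = rigid([0.3, 1.0, -0.5], source)
    print(register(source, target))   # should approach [0.3, 1.0, -0.5]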
An Automatic Approach to Detect Software Anomalies in Cloud Computing Using Pragmatic Bayes Approach
Directory of Open Access Journals (Sweden)
Nethaji V
2014-06-01
Full Text Available Software detection of anomalies is a vital element of operations in data centers and service clouds. Statistical Process Control (SPC) cloud charts sense routine anomalies, and their root causes are identified based on the differential profiling strategy. By automating the tasks, most of the manual overhead incurred in detecting software anomalies and the analysis time are reduced to a large extent, but a detailed analysis of profiling data is not performed in most cases. On the other hand, the cloud scheduler weighs both the requirements of the user and the available infrastructure in order to match them. The OpenStack prototype works on cloud trust management, which provides the scheduler, but complexity occurs when hosting the cloud system. At the same time, the Trusted Computing Base (TCB) of a computing node does not achieve the scalability measure. This unique paradigm brings about many software anomalies, which have not been well studied. In this work, a Pragmatic Bayes (PB) approach studies the problem of detecting software anomalies and ensures scalability by comparing information at the current time to historical data. In particular, the PB approach uses a two-component Gaussian mixture to model deviations at the current time in the cloud environment. The introduction of the Gaussian mixture in the PB approach achieves a higher scalability measure, which involves supervising a massive number of cells, and is fast enough to be potentially useful in many streaming scenarios. Whereas previous work on scheduling often lacks scalability, this paper shows the superiority of the method using a Bayes per-section error rate procedure through simulation, and provides a detailed analysis of profiling data in the marginal distributions using the Amazon EC2 dataset. Extensive performance analysis shows that the PB approach is highly efficient in terms of runtime, scalability, software anomaly detection ratio, CPU utilization, density rate, and computational
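A minimal sketch of two-component Gaussian-mixture anomaly scoring of the kind described (scikit-learn; the metric stream, mixture sizes and threshold are illustrative):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    # historical metric stream: mostly normal behavior plus a small anomalous mode
    normal = rng.normal(50.0, 5.0, (950, 1))        # e.g., CPU utilization in %
    anomalous = rng.normal(90.0, 3.0, (50, 1))
    history = np.vstack([normal, anomalous])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(history)
    minor = np.argmin(gmm.weights_)       # low-weight component = anomaly mode

    def is_anomaly(x, p_cut=0.5):
        # Flag an observation if it most likely came from the minor component.
        resp = gmm.predict_proba(np.atleast_2d(x))[0]
        return resp[minor] > p_cut

    print(is_anomaly([52.0]), is_anomaly([88.0]))   # False, True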
Khatami, F.; Weide, van der E.T.A.; Hoeijmakers, H.W.M.
2015-01-01
In this paper a numerical simulation of unsteady sheet cavitation is presented as it occurs on an NACA-0015 hydrofoil. The computational approach is based on the Euler equations for unsteady compressible flow, using an equilibrium cavitation model of Schnerr, Schmidt, and Saurel. It was found that f
Krishna Kumar, P; Araki, Tadashi; Rajan, Jeny; Saba, Luca; Lavra, Francesco; Ikeda, Nobutaka; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Gupta, Ajay; Suri, Jasjit S
2016-12-10
Monitoring of cerebrovascular diseases via carotid ultrasound has started to become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantification of the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective and slow. The curvature of the vessels, along with non-uniformity in plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, the scale-space is first used to get the lumen axis, which is then used with the spatial image transformation paradigm to get a transformed image. The scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework. Finally, the inverse transformation is applied to display the results in the original image framework. Two hundred and two patients' left and right common carotid artery (404 carotid images) B-mode ultrasound images were retrospectively analyzed. The validation of our algorithm was done against two manual expert tracings. The coefficients of correlation between the two manual tracings for LD were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7 and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches. Several statistical tests demonstrated the stability and reliability of the automated system.
Ihrig, Arvid Conrad; Wieferink, Jürgen; Zhang, Igor Ying; Ropo, Matti; Ren, Xinguo; Rinke, Patrick; Scheffler, Matthias; Blum, Volker
2015-09-01
A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant (‘RI-LVL’), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
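For orientation, the RI idea referred to above can be written compactly in LaTeX notation: with auxiliary basis functions P, Q, the four-center two-electron integrals are approximated as

    (ij|kl) \approx \sum_{PQ} (ij|P)\,(P|Q)^{-1}\,(Q|kl),

and in the RI-LVL variant the sum for a given basis-function product ij is restricted to auxiliary functions P, Q centered on the atoms that carry i and j, which is what yields the reduced memory footprint and the path to linear scaling.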
Directory of Open Access Journals (Sweden)
Om Prakash Gurjar
2016-03-01
Full Text Available Purpose: Various factors cause geometric uncertainties during prostate radiotherapy, including interfractional and intrafractional patient motion, organ motion, and daily setup errors. This may lead to increased normal tissue complications when a high dose to the prostate is administered. More accurate treatment delivery is possible with daily imaging and localization of the prostate. This study aims to measure the shift of the prostate by using kilovoltage (kV) cone beam computed tomography (CBCT) after position verification by kV orthogonal portal imaging (OPI). Methods: Position verification in 10 patients with prostate cancer was performed by using OPI followed by CBCT before treatment delivery in 25 sessions per patient. In each session, OPI was performed by using an on-board imaging (OBI) system and pelvic bone-to-pelvic bone matching was performed. After applying the shift noted with OPI, CBCT was performed by using the OBI system and prostate-to-prostate matching was performed. The isocenter shifts along all three translational directions in both techniques were combined into a three-dimensional (3-D) iso-displacement vector (IDV). Results: The mean (SD) IDV (in centimeters) calculated over the 250 imaging sessions was 0.931 (0.598), median 0.825, for OPI and 0.515 (0.336), median 0.43, for CBCT; the p-value was less than 0.0001, indicating a highly statistically significant difference. Conclusion: Even after bone-to-bone matching by using OPI, a significant shift in the prostate was observed on CBCT. This study concludes that imaging with CBCT provides more accurate prostate localization than the OPI technique. Hence, CBCT should be chosen as the preferred imaging technique.
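The IDV combination described above is simply the Euclidean norm over the three translational shifts. The sketch below computes it for made-up shift data and runs a paired t-test as a stand-in for the paper's significance test.

```python
# Minimal sketch: combine translational isocenter shifts into a 3-D
# iso-displacement vector (IDV) and compare the two techniques.
# Shift values below are made up for illustration.
import numpy as np
from scipy import stats

shifts_opi = np.abs(np.random.default_rng(2).normal(0.5, 0.3, size=(250, 3)))
shifts_cbct = np.abs(np.random.default_rng(3).normal(0.3, 0.2, size=(250, 3)))

idv_opi = np.linalg.norm(shifts_opi, axis=1)    # sqrt(dx^2 + dy^2 + dz^2), cm
idv_cbct = np.linalg.norm(shifts_cbct, axis=1)

t, p = stats.ttest_rel(idv_opi, idv_cbct)       # paired comparison per session
print(idv_opi.mean(), idv_cbct.mean(), p)
```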
A Near-Term Quantum Computing Approach for Hard Computational Problems in Space Exploration
Smelyanskiy, Vadim N; Knysh, Sergey I; Williams, Colin P; Johnson, Mark W; Thom, Murray C; Macready, William G; Pudenz, Kristen L
2012-01-01
In this article, we show how to map a sampling of the hardest artificial intelligence problems in space exploration onto equivalent Ising models that can then be attacked using quantum annealing implemented in the D-Wave machine. We review the existing results and propose new Ising model implementations for quantum annealing. We review supervised and unsupervised learning algorithms for classification and clustering with applications to feature identification and anomaly detection. We introduce algorithms for data fusion and image matching for remote sensing applications. We review planning problems for space exploration mission applications and algorithms for diagnostics and recovery with applications to deep space missions. We describe combinatorial optimization algorithms for task assignment in the context of autonomous unmanned exploration. Finally, we discuss ways to circumvent the limitations of the Ising mapping using a "blackbox" approach based on ideas from probabilistic computing. In this ...
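For readers unfamiliar with the target formulation, the sketch below writes down a three-spin Ising Hamiltonian with arbitrary illustrative couplings and finds its ground state by brute force; a quantum annealer samples low-energy states of exactly this kind of energy function.

```python
# Minimal sketch: a tiny Ising model of the kind such problems are mapped to,
# solved by brute force. Couplings here are arbitrary illustrative values.
import itertools
import numpy as np

h = np.array([0.5, -1.0, 0.25])                 # local fields
J = {(0, 1): -1.0, (1, 2): 0.5, (0, 2): 0.75}   # pairwise couplings

def energy(s):
    return sum(h[i] * s[i] for i in range(len(s))) + \
           sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

best = min(itertools.product([-1, 1], repeat=3), key=energy)
print(best, energy(best))   # ground-state spin configuration and its energy
```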
Applying a cloud computing approach to storage architectures for spacecraft
Baldor, Sue A.; Quiroz, Carlos; Wood, Paul
As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been developed to address both complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection
Energy Technology Data Exchange (ETDEWEB)
Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.
2016-05-25
With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth, using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit-cell area match between the substrate and the target film, as well as the resulting strain energy density of the film, provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions of substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework and its data, and hence will be publicly available.
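The geometric part of such screening reduces to lattice mismatch and a biaxial strain-energy estimate. The sketch below computes both for one hypothetical film/substrate pair; lattice constants and elastic constants are illustrative values, not those used in the paper.

```python
# Minimal sketch of the geometric screening idea: epitaxial biaxial strain
# from lattice mismatch and the resulting elastic strain energy density.
import numpy as np

a_film, b_film = 4.55, 5.70     # film in-plane lattice constants (Angstrom)
a_sub, b_sub = 4.59, 5.44       # candidate substrate plane

eps_a = (a_sub - a_film) / a_film       # biaxial strain components
eps_b = (b_sub - b_film) / b_film

E, nu = 140e9, 0.3                      # Young's modulus (Pa), Poisson ratio
M = E / (1 - nu)                        # biaxial modulus
u = 0.5 * M * (eps_a**2 + eps_b**2)     # strain energy density (J/m^3), simple estimate
print(eps_a, eps_b, u)
```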
Computer-aided interpretation approach for optical tomographic images
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.
2010-11-01
A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performance obtained by the proposed method was evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground-truth benchmarks for determining the performance of the image interpretations. Using data from 100 finger joints, the findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to the single-parameter classifications employed in previous studies. Maximum performance is reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
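The evaluation metrics named above follow directly from a confusion matrix. The sketch below computes them for synthetic labels and predictions; it shows the metric definitions, not the paper's classifier.

```python
# Minimal sketch: sensitivity, specificity, and the Youden index for a
# binary classifier. Labels and predictions are synthetic stand-ins.
import numpy as np

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # 1 = affected by RA
pred  = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])   # classifier output

tp = np.sum((pred == 1) & (truth == 1))
tn = np.sum((pred == 0) & (truth == 0))
fp = np.sum((pred == 1) & (truth == 0))
fn = np.sum((pred == 0) & (truth == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
youden = sensitivity + specificity - 1      # 0 = chance, 1 = perfect
print(sensitivity, specificity, youden)
```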
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) were popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production-level implementation of the closed-shell CEPA and CPF methods is reported that can be applied to medium-sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
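The PNO compression named above rests on diagonalizing a pair density matrix and discarding natural orbitals with tiny occupation numbers. The sketch below demonstrates that step on synthetic amplitudes with artificially decaying singular values; it illustrates the idea, not the LPNO-CEPA code.

```python
# Minimal sketch of the pair-natural-orbital (PNO) truncation step.
import numpy as np

nvirt = 200
rng = np.random.default_rng(4)
# Random orthogonal factors with decaying singular values mimic the rapid
# decay of realistic pair amplitudes.
U, _ = np.linalg.qr(rng.normal(size=(nvirt, nvirt)))
V, _ = np.linalg.qr(rng.normal(size=(nvirt, nvirt)))
s = np.exp(-0.2 * np.arange(nvirt))
T = U @ np.diag(s) @ V.T                # stand-in pair amplitudes T_ab

D = T @ T.T                             # approximate pair density matrix
occ, W = np.linalg.eigh(D)              # natural occupation numbers/orbitals
keep = occ > 1e-7                       # PNO occupation cutoff
print(f"kept {keep.sum()} of {nvirt} virtual orbitals")
```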
De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S
2016-12-01
An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements of the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, which is freely available under a GNU public license.
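At its core, such model-based estimation is nonlinear least-squares fitting of peaked functions to image segments. The sketch below fits a single 2-D Gaussian to one synthetic atomic column with SciPy; real implementations such as StatSTEM fit many overlapping columns simultaneously.

```python
# Minimal sketch: least-squares fit of a 2-D Gaussian to one (synthetic)
# atomic column, the building block of model-based position estimation.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, height, width, bg):
    x, y = xy
    return bg + height * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * width**2))

x, y = np.meshgrid(np.arange(32), np.arange(32))
true = gauss2d((x, y), 15.3, 16.8, 100.0, 2.5, 5.0)
img = true + np.random.default_rng(5).normal(0, 3, true.shape)  # noisy column

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), img.ravel(),
                    p0=(16, 16, 80, 3, 0))
print(popt[:2])   # estimated column position (x0, y0)
```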
An Educational Approach to Computationally Modeling Dynamical Systems
Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl
2009-01-01
Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…
The Metacognitive Approach to Computer Education: Making Explicit the Learning Journey
Phelps, Renata
2007-01-01
This paper presents a theoretical and practical exploration of a metacognitive approach to computer education, developed through a three-year action research project. It is argued that the approach contrasts significantly with often-employed directive and competency-based approaches to computer education and is more appropriate in addressing the…
Gol Mohammadi, N.; Bandyszak, T.; Goldsteen, A.; Kalogiros, C.; Weyer, T.; Moffie, M.; Nasser, B.; Surridge, M
2015-01-01
The analysis of existing software evaluation techniques reveals the need for evidence-based evaluation of systems’ trustworthiness. This paper aims at evaluating trustworthiness of socio-technical systems during design-time. Our approach combines two existing evaluation techniques: a computational approach and a risk management approach. The risk-based approach identifies threats to trustworthiness on an abstract level. Computational approaches are applied to evaluate the expected end-to-en...
Human Computation An Integrated Approach to Learning from the Crowd
Law, Edith
2011-01-01
Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy
A Monomial Chaos Approach for Efficient Uncertainty Quantification in Computational Fluid Dynamics
Witteveen, J.A.S.; Bijl, H.
2006-01-01
A monomial chaos approach is proposed for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can still be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equ
Approaching the Computational Color Constancy as a Classification Problem through Deep Learning
Oh, Seoung Wug; Kim, Seon Joo
2016-01-01
Computational color constancy refers to the problem of computing the illuminant color so that the images of a scene under varying illumination can be normalized to an image under the canonical illumination. In this paper, we adopt a deep learning framework for the illumination estimation problem. The proposed method works under the assumption of uniform illumination over the scene and aims for the accurate illuminant color computation. Specifically, we trained the convolutional neural network...
Energy Technology Data Exchange (ETDEWEB)
Jitaru, Petru, E-mail: Petru.Jitaru@lne.fr [Laboratoire National de Metrologie et d'Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France); Goenaga-Infante, Heidi [LGC Limited, Queens Road, Teddington, TW11 OLY, Middlesex (United Kingdom); Vaslin-Reimann, Sophie; Fisicaro, Paola [Laboratoire National de Metrologie et d'Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France)
2010-01-11
In this paper, two different methods are systematically compared for the first time for the determination of selenium in human serum selenoalbumin (SeAlb). Firstly, SeAlb was enzymatically hydrolyzed and the resulting selenomethionine (SeMet) was quantified using species-specific isotope dilution (SSID) with reversed phase-HPLC (RP-HPLC) hyphenated to (collision/reaction cell) inductively coupled plasma-quadrupole mass spectrometry (CRC ICP-QMS). In order to assess the enzymatic hydrolysis yield, SeAlb was determined as an intact protein by affinity-HPLC (AF-HPLC) coupled to CRC ICP-QMS. Using this approach, glutathione peroxidase (GPx) and selenoprotein P (SelP) (the two selenoproteins present in serum) were also determined within the same chromatographic run. The levels of selenium associated with SeAlb in three serum materials, namely BCR-637, Seronorm level 1 and Seronorm level 2, obtained using both methods were in good agreement. Verification of the absence of free SeMet, which interferes with the SeAlb determination (down to the amino acid level), in such materials was addressed by analyzing the fraction of GPx, partially purified by AF-HPLC, using RP-HPLC (GPx only) and size exclusion-HPLC (SE-HPLC) coupled to CRC ICP-QMS. The latter methodology was also used to investigate the presence of selenium species other than the selenoproteins in the (AF-HPLC) SelP and SeAlb fractions; the same selenium peaks were detected in both the control and the BCR-637 serum, which differ in age by ca. 12 years. This is also the first time that the concentrations of selenium associated with SeAlb, GPx and SelP species in such commercially available serums (only certified for, or having indicative levels of, total selenium content) are reported. Such indicative values can be used for reference purposes in future validation of speciation methods for selenium in human serum and/or inter-laboratory comparisons.
Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.
2016-02-01
In this paper, we propose a new, next-generation type of CT examination, so-called Interior Computed Tomography (ICT), which may lead to a reduced dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions with two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.
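To make the CS framework concrete, the sketch below recovers a sparse signal from underdetermined measurements with ISTA (iterative soft thresholding); the random matrix is a stand-in for a few-view projection operator, and this is not the authors' reconstruction algorithm.

```python
# Minimal sketch of compressed sensing: sparse recovery via ISTA.
import numpy as np

rng = np.random.default_rng(6)
n, m = 200, 60                          # unknowns vs. few-view measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)  # sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                          # projection data

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2)**2    # 1/Lipschitz constant of the gradient
lam = 0.01
for _ in range(500):                    # ISTA iterations
    z = x - step * A.T @ (A @ x - y)    # gradient step on data fidelity
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
print(np.linalg.norm(x - x_true))       # reconstruction error should be small
```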
A computational intelligence approach to the Mars Precision Landing problem
Birge, Brian Kent, III
Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars precision landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
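Since the study's optimizer is PSO, a generic textbook PSO loop is sketched below on a toy objective; the cost function, bounds, and hyperparameters are invented for illustration and are unrelated to the actual entry-guidance problem.

```python
# Minimal particle swarm optimization (PSO) sketch, minimizing a toy 2-D
# cost surface (a stand-in for a landing-footprint objective).
import numpy as np

def cost(p):                            # toy objective: distance from target
    return np.sum((p - np.array([3.0, -2.0]))**2, axis=-1)

rng = np.random.default_rng(9)
n, dim = 30, 2
x = rng.uniform(-10, 10, (n, dim))      # particle positions
v = np.zeros((n, dim))                  # particle velocities
pbest = x.copy()                        # personal best positions
gbest = x[np.argmin(cost(x))]           # global best position

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    better = cost(x) < cost(pbest)      # update personal bests
    pbest[better] = x[better]
    gbest = pbest[np.argmin(cost(pbest))]

print(gbest)                            # converges near (3, -2)
```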
A computational toy model for shallow landslides: Molecular dynamics approach
Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele
2013-09-01
The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We use a molecular dynamics (MD) approach, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of static friction along the sliding surface due to water infiltration by rainfall. The triggering is thus governed by the two following conditions: (a) a threshold speed of the particles and (b) a condition on the static friction between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches follow a power law, analogous to the observed Gutenberg-Richter and Omori power law distributions for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
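The two model ingredients named above translate directly into code. The sketch below gives a Lennard-Jones-like pair force and a Mohr-Coulomb failure test with illustrative parameter values; it sketches the ingredients, not the authors' full 2D particle simulation.

```python
# Minimal sketch: Lennard-Jones-like pair force plus a Mohr-Coulomb
# failure check of the kind used to trigger particle motion.
import numpy as np

eps, sigma = 1.0, 1.0
def lj_force(r):
    """Magnitude of the LJ pair force at separation r (positive = repulsive)."""
    return 24 * eps * (2 * (sigma / r)**12 - (sigma / r)**6) / r

def mohr_coulomb_fails(tau, sigma_n, cohesion=5.0, phi_deg=30.0):
    """Does shear stress tau exceed the Coulomb strength c + sigma_n*tan(phi)?"""
    return tau > cohesion + sigma_n * np.tan(np.radians(phi_deg))

print(lj_force(1.2), mohr_coulomb_fails(tau=12.0, sigma_n=10.0))
```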
Energy Technology Data Exchange (ETDEWEB)
Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)
2007-10-15
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Riviere, Jim E.; Scoglio, Caterina; Sahneh, Faryad D.; Monteiro-Riviere, Nancy A.
2013-01-01
The field of nanomaterial pharmacokinetics is in its infancy, with major advances largely restricted by a lack of biologically relevant metrics, fundamental differences between particles and small molecules of organic chemicals and drugs relative to the biological processes involved in disposition, a scarcity of sufficiently rich and characterized in vivo data, and a lack of computational approaches for integrating nanomaterial properties with biological endpoints. A central concept that links nanomaterial properties to biological disposition, in addition to their colloidal properties, is the tendency to form a biocorona, which modulates biological interactions including cellular uptake and biodistribution. Pharmacokinetic models must take this crucial process into consideration to accurately predict in vivo disposition, especially when extrapolating from laboratory animals to humans, since allometric principles may not be applicable. The dynamics of corona formation is thereby a crucial process in determining the rate and extent of biodisposition. The challenge will be to develop a quantitative metric that characterizes a nanoparticle's surface adsorption forces, which are important for predicting biocorona dynamics. These types of integrative quantitative approaches discussed in this paper for the dynamics of corona formation must be developed before realistic engineered nanomaterial risk assessment can be accomplished.
Investigation of Stent Implant Mechanics Using Linear Analytical and Computational Approach.
Yang, Hua; Fortier, Aleksandra; Horne, Kyle; Mohammad, Atif; Banerjee, Subhash; Han, Hai-Chao
2017-03-01
Stent implants are essential in restoring normal blood flow in atherosclerotic arteries. Recent studies have shown high failure rates of stent implants in the superficial femoral artery (SFA) as a result of the dynamic loading environment imposed on the implants by the diseased arterial wall and turbulent blood flow. A variety of stent designs and materials are currently on the market; however, there is no clear understanding of whether a specific stent design is suited to the material it is manufactured from, and whether this combination can sustain the load cycles that stent implants undergo once inside the artery. Few studies have related stent mechanical properties to stent geometry and material. This study presents a linear theoretical and computational modeling approach that relates stent mechanical properties to the effective stiffness of the deployed stent. The effective stiffness of the stent is derived based on the stent structure design and loading in the axial and radial directions. A rhombus stent structure was selected for this study due to its common use by mainstream manufacturers. The derived theoretical model was validated using a numerical finite element modeling approach. Results from this study provide preliminary insight into stent deformation based on stent geometry, material properties and artery wall pressure, and into how to match a stent's geometry with a suitable material for a long life cycle, increased strength, and reliable performance of stent implants.
Computer Science Contests for Secondary School Students: Approaches to Classification
Directory of Open Access Journals (Sweden)
Wolfgang POHL
2006-04-01
Full Text Available The International Olympiad in Informatics currently provides a model which is imitated by the majority of contests for secondary school students in Informatics or Computer Science. However, the IOI model can be criticized, and alternative contest models exist. To support the discussion about contests in Computer Science, several dimensions for characterizing and classifying contests are suggested.
Gesture Recognition by Computer Vision: An Integral Approach
Lichtenauer, J.F.
2009-01-01
The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads
Development of Computer Science Disciplines - A Social Network Analysis Approach
Pham, Manh Cuong; Jarke, Matthias
2011-01-01
In contrast to many other scientific disciplines, computer science places great weight on conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on the ISI-published JCR (Journal Citation Report). Although this data covers most important journals, it lacks computer science conference and workshop proceedings, which results in an imprecise and incomplete analysis of computer science knowledge. This paper presents an analysis of the computer science knowledge network constructed from all types of publications, aiming to provide a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at the journal/conference level using citation linkage, to identify the development of sub-disciplines. We investiga...
Heinz, Hendrik
2014-06-18
Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1-1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed.
Development of a computationally efficient urban flood modelling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Ntegeka, Victor; Murla, Damian
the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than the original highly...
Loss tolerant one-way quantum computation -- a horticultural approach
Varnava, M; Rudolph, T; Varnava, Michael; Browne, Daniel E.; Rudolph, Terry
2005-01-01
We introduce a scheme for fault tolerantly dealing with losses in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively - no coherent measurements or coherent correction is required. We then use this procedure within a specific linear optical quantum computation proposal to show that: (i) given perfect sources, detector inefficiencies of up to 50% can be tolerated and (ii) given perfect detectors, the purity of the photon source (overlap of the photonic wavefunction with the desired single mode) need only be greater than 66.6% for efficient computation to be possible.
An introduction to statistical computing a simulation-based approach
Voss, Jochen
2014-01-01
A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced met
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for introducing modeling methodology in computer science lessons. The need to study computer modeling arises because current trends toward strengthening the general educational and worldview functions of computer science call for additional research on the…
Reflections on John Monaghan's "Computer Algebra, Instrumentation, and the Anthropological Approach"
Blume, Glen
2007-01-01
Reactions to John Monaghan's "Computer Algebra, Instrumentation and the Anthropological Approach" focus on a variety of issues related to the ergonomic approach (instrumentation) and anthropological approach to mathematical activity and practice. These include uses of the term technique; several possibilities for integration of the two approaches;…
Energy Technology Data Exchange (ETDEWEB)
Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States); Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry
2016-11-28
The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage that enable meeting the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H_{2} or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH_{2}+NH_{3}BH_{3}] and nitrogen-hydrogen based borohydrides [e.g. Al(BH_{4})_{3}(NH_{3})_{3}]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility. And
Match and Move, an Approach to Data Parallel Computing
1992-10-01
AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis
Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.
The AVES computing system, based on a "cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the software resource manager SLURM and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs, able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users' area. AVES was designed and built to solve the growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload on the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained by a parallel computing configuration. In support of this, we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. The total computing time, compared to that of a single-processor personal computer, is thus improved by up to a factor of 70.
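The splitting strategy described above is embarrassingly parallel, so it can be sketched with a simple worker pool. In the sketch below, run_osa_job is a hypothetical stand-in for invoking one OSA analysis task on one science window; the real system dispatches such jobs across cluster nodes via SLURM.

```python
# Minimal sketch: divide an analysis over independent science windows into
# N jobs and run them on all available cores.
from multiprocessing import Pool

def run_osa_job(scw_id):
    """Hypothetical wrapper that analyses one science window."""
    return f"analysed {scw_id}"

science_windows = [f"scw_{i:04d}" for i in range(120)]

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per core
        results = pool.map(run_osa_job, science_windows)
    print(len(results), "jobs completed")
```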
Parallel MMF: a Multiresolution Approach to Matrix Computation
Kondor, Risi; Teneva, Nedelina; Mudrakarta, Pramod K.
2015-01-01
Multiresolution Matrix Factorization (MMF) was recently introduced as a method for finding multiscale structure and defining wavelets on graphs/matrices. In this paper we derive pMMF, a parallel algorithm for computing the MMF factorization. Empirically, the running time of pMMF scales linearly in the dimension for sparse matrices. We argue that this makes pMMF a valuable new computational primitive in its own right, and present experiments on using pMMF for two distinct purposes: compressing...
Computational challenges of structure-based approaches applied to HIV.
Forli, Stefano; Olson, Arthur J
2015-01-01
Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.
Directory of Open Access Journals (Sweden)
S. Sofana Reka
2016-09-01
Full Text Available This paper proposes a cloud computing framework in a smart grid environment, creating a small integrated energy hub that supports real-time computation for handling large volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in MATLAB, reducing the electricity demand by creating energy networks in a smart hub approach.
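As a toy version of the optimization behind DSM, the sketch below schedules a flexible load across hours to minimize cost, using SciPy's linear programming solver in place of the paper's stochastic Gurobi model; all prices and limits are invented.

```python
# Minimal sketch: shift flexible load across hours to minimize cost under
# a total-energy constraint (a deterministic toy version of DSM).
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10, 0.08, 0.12, 0.20, 0.25, 0.15])  # $/kWh per hour
total_energy = 30.0                                      # kWh that must be served
max_per_hour = 8.0                                       # appliance power limit

res = linprog(c=price,
              A_eq=[np.ones_like(price)], b_eq=[total_energy],
              bounds=[(0, max_per_hour)] * len(price))
print(res.x, res.fun)   # cheapest feasible hourly schedule and its cost
```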
Challenges and possible approaches: towards the petaflops computers
Institute of Scientific and Technical Information of China (English)
Depei QIAN; Danfeng ZHU
2009-01-01
In parallel with the R&D efforts in the USA and Europe, China's National High-tech R&D program has set up its goal of developing petaflops computers. Researchers and engineers worldwide are looking for appropriate methods and technologies to achieve the petaflops computer system. Based on a discussion of important design issues in developing the petaflops computer, this paper raises the major technological challenges, including the memory wall, low-power system design, interconnects, and programming support, etc. Current efforts in addressing some of these challenges and in pursuing possible solutions for developing petaflops systems are presented. Several existing systems are briefly introduced as examples, including Roadrunner, Cray XT5 Jaguar, Dawning 5000A/6000, and Lenovo DeepComp 7000. Architectures proposed by Chinese researchers for implementing the petaflops computer are also introduced. Advantages of the architecture as well as the difficulties in its implementation are discussed. Finally, future research directions in the development of high-productivity computing systems are discussed.
Computer Mediated Learning: An Example of an Approach.
Arcavi, Abraham; Hadas, Nurit
2000-01-01
There are several possible approaches in which dynamic computerized environments play a significant and possibly unique role in supporting innovative learning trajectories in mathematics in general and geometry in particular. Describes an approach based on a problem situation and some experiences using it with students and teachers. (Contains 15…
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
The use of computational approaches in inhaler development.
Wong, William; Fletcher, David F; Traini, Daniela; Chan, Hak-Kim; Young, Paul M
2012-03-30
Computational Fluid Dynamics (CFD) and Discrete Element Modelling (DEM) studies relevant to inhaled drug delivery are reviewed. CFD is widely used in device design to determine airflow patterns and turbulence levels. CFD is also used to simulate particles and droplets, which are subjected to various forces, turbulence and wall interactions. These studies can now be performed routinely because of the availability of commercial software containing high quality turbulence and particle models. DEM allows for the modelling of agglomerate break-up upon interaction with a wall or due to shear in the flow. However, the computational cost is high and the number of particles that can be simulated is minimal compared with the number present in typical inhaled formulations. Therefore DEM is currently limited to fundamental studies of break-up mechanisms. With decreasing computational limitations, simulations combining CFD and DEM that can address outstanding issues in agglomerate break-up and dispersion will be possible.
Hu, X.; Zhang, Y.
2007-05-01
The Weather Research and Forecast/Chemistry Model (WRF/Chem), which simulates chemistry simultaneously with meteorology, has recently been developed for real-time forecasting by the U.S. National Center for Atmospheric Research (NCAR) and the National Oceanic & Atmospheric Administration (NOAA). As one of six air quality models, WRF/Chem with a modal aerosol module has been applied for ozone and PM2.5 ensemble forecasts over eastern North America as part of the 2004 New England Air Quality Study (NEAQS) program (NEAQS-2004). Significant differences exist in the partitioning of volatile species (e.g., ammonium and nitrate) simulated by the six models. Model biases are partially attributed to the equilibrium assumption used in the gas/particle mass transfer approach of some models. Development of a more accurate, yet computationally efficient gas/particle mass transfer approach for three-dimensional (3-D) applications, in particular real-time forecasting, is therefore warranted. The Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) has been implemented into WRF/Chem (referred to as WRF/Chem-MADRID). WRF/Chem-MADRID offers three gas/particle partitioning treatments: equilibrium, kinetic, and hybrid approaches. The equilibrium approach is computationally efficient and commonly used in 3-D air quality models but less accurate under certain conditions (e.g., in the presence of coarse, reactive particles such as PM containing sea salts in coastal areas). The kinetic approach is accurate but computationally expensive, limiting its 3-D applications. The hybrid approach attempts to provide a compromise between the merits and drawbacks of the two approaches by treating fine PM with the equilibrium approach and coarser PM kinetically. A kinetic approach in MADRID has recently been developed for 3-D applications based on an Analytical Predictor of Condensation (referred to as kinetic/APC). In this study, WRF/Chem-MADRID with the kinetic/APC approach will be further evaluated along with the equilibrium and hybrid approaches.
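The contrast between the equilibrium and kinetic views can be shown in a few lines: the kinetic approach relaxes the gas-phase concentration toward equilibrium at a finite mass-transfer rate rather than assuming instantaneous partitioning. The sketch below integrates that relaxation with explicit Euler; all values are illustrative, not model parameters.

```python
# Minimal sketch: kinetic gas/particle mass transfer as first-order
# relaxation toward the equilibrium concentration.
c_gas, c_eq = 10.0, 4.0      # gas-phase and equilibrium concentrations (ug/m^3)
k = 2.0e-3                   # first-order mass-transfer coefficient (1/s)
dt, t_end = 1.0, 3600.0      # Euler step and horizon (s)

for _ in range(int(t_end / dt)):          # explicit Euler integration
    c_gas += -k * (c_gas - c_eq) * dt     # condensation drives gas toward c_eq

print(c_gas)   # approaches the value the equilibrium approach assumes outright
```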
Energy Technology Data Exchange (ETDEWEB)
Ko, Soon Heum [Linkoeping University, Linkoeping (Sweden); Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel [Louisiana State University, Baton Rouge (United States); Jha, Shantenu [Rutgers University, Piscataway (United States)
2014-01-15
Numerical approaches are presented to minimize the statistical errors inherently present, due to finite sampling and thermal fluctuations, in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
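The two noise-reduction ideas named above are simple to demonstrate: average over replicas, then regress over spatial samples of a locally linear velocity profile. The sketch below does both on synthetic stand-in data for MD output.

```python
# Minimal sketch: replica averaging plus linear regression to suppress
# sampling noise in a low-velocity profile.
import numpy as np

rng = np.random.default_rng(7)
y = np.linspace(0.0, 1.0, 20)                 # bin centers across the MD region
u_true = 0.05 * y                             # low-velocity Couette-like profile

replicas = u_true + rng.normal(0, 0.02, size=(16, y.size))  # 16 noisy replicas
u_avg = replicas.mean(axis=0)                 # replica averaging (noise/sqrt(16))

slope, intercept = np.polyfit(y, u_avg, 1)    # linear regression over space
u_smooth = slope * y + intercept
print(np.abs(u_smooth - u_true).max())        # much smaller residual noise
```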
Simulation of quantum computation : A deterministic event-based approach
Michielsen, K; De Raedt, K; De Raedt, H
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
A New Approach: Computer-Assisted Problem-Solving Systems
Gok, Tolga
2010-01-01
Computer-assisted problem solving systems are rapidly growing in educational use and with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs like Blackboard, WebAssign and LON-CAPA program etc. There are benefits and drawbacks of these systems. In this study, the…
R for cloud computing an approach for data scientists
Ohri, A
2014-01-01
R for Cloud Computing looks at some of the tasks performed by business analysts on the desktop (PC era) and helps the user navigate the wealth of information in R and its 4000 packages as well as transition the same analytics using the cloud. With this information the reader can select both cloud vendors and the sometimes confusing cloud ecosystem as well as the R packages that can help process the analytical tasks with minimum effort and cost, and maximum usefulness and customization. The use of Graphical User Interfaces (GUI) and Step by Step screenshot tutorials is emphasized in this book to lessen the famous learning curve in learning R and some of the needless confusion created in cloud computing that hinders its widespread adoption. This will help you kick-start analytics on the cloud including chapters on cloud computing, R, common tasks performed in analytics, scrutiny of big data analytics, and setting up and navigating cloud providers. Readers are exposed to a breadth of cloud computing ch...
A Computational Approach to the Quantification of Animal Camouflage
2014-06-01
their visual backgrounds (Hanlon and Messenger 1996; Messenger 2001b) – a trait noted in Aristotle's Historia Animalium (Aristotle 1910), and grossly...
Exploring polymorphism in molecular crystals with a computational approach
Ende, J.A. van den
2016-01-01
Different crystal structures can possess different properties and therefore the control of polymorphism in molecular crystals is a goal in multiple industries, e.g. the pharmaceutical industry. Part I of this thesis is a computational study at the molecular scale of a particular solid-solid polymorp
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
Linguistics, Computers, and the Language Teacher. A Communicative Approach.
Underwood, John H.
This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…
Nested Transactions: An Approach to Reliable Distributed Computing.
1981-04-01
Undoubtedly such universal use of computers and rapid exchange of information will have a dramatic impact: social, economic, and political. Distributed... level transaction, these committed inferiors are successful inferiors of the top-level transaction, too. Therefore q will indeed get a commit
Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub
2011-01-01
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…
Energy Technology Data Exchange (ETDEWEB)
James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew
2013-10-01
The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
Design of new and potent diethyl thiobarbiturates as urease inhibitors: a computational approach.
Wadood, Abdul; Riaz, Muhammad; Mulk, Amir Ul; Khan, Momin; Haleem, Sobia Ahsan; Shams, Sulaiman; Gul, Sahib; Ahmed, Ayaz; Qasim, Muhammad; Ali, Farman; Ul-Haq, Zaheer
2014-01-01
Urease is an important enzyme in both agricultural and medical research. Strategies based on urease inhibition are considered the first-line treatment of infections caused by urease-producing bacteria. Since urease possesses agrochemical and medicinal importance, it is necessary to search for novel compounds capable of inhibiting this enzyme. Several computational methods were employed to design novel and potent urease inhibitors in this work. First, docking simulations of known compounds, a set of arylidene barbiturates (termed the reference set), were performed on Bacillus pasteurii (BP) urease. Subsequently, a two-fold strategy was used to design new compounds against urease. Stage 1 comprised the energy minimization of the enzyme-ligand complexes of the reference compounds and the accurate prediction of the molecular mechanics generalized Born (MMGB) interaction energies. In the second stage, new urease inhibitors were designed by substituting different groups consecutively in the aryl ring of the thiobarbiturates and N,N-diethyl thiobarbiturates of the reference ligands. The enzyme-ligand complexes with the lowest interaction energies, or with energies close to the calculated interaction energies of the reference molecules, were selected for subsequent chemical manipulation. This was followed by the substitution of different groups at the 2 and 5 positions of the aryl ring. As a result, several new and potent diethyl thiobarbiturates were predicted as urease inhibitors. This approach reflects a logical progression for early-stage drug discovery that can be exploited to successfully identify potential drug candidates.
Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.
Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao
2013-09-10
Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied as a fraction between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing-potential QM/MM free-energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of a large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.
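The λ-coupling idea behind such methods can be illustrated with thermodynamic integration: integrate the ensemble average of dU/dλ from one end state to the other, then convert the free-energy difference to a pKa shift relative to a reference compound. In the sketch below, the dU/dλ values and reference numbers are synthetic stand-ins for simulation output.

```python
# Minimal sketch: thermodynamic integration over lambda and conversion of
# the relative deprotonation free energy to a pKa shift.
import numpy as np

R, T = 1.987e-3, 298.15              # kcal/(mol K), temperature (K)
lambdas = np.linspace(0.0, 1.0, 11)
dU = 60.0 - 25.0 * lambdas           # hypothetical <dU/dlambda> (kcal/mol)

# Trapezoid rule for dG = integral of <dU/dlambda> over lambda.
dG = np.sum(0.5 * (dU[1:] + dU[:-1]) * np.diff(lambdas))

dG_ref, pKa_ref = 45.0, 15.5         # illustrative reference compound values
pKa = pKa_ref + (dG - dG_ref) / (np.log(10) * R * T)
print(dG, pKa)
```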
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. In addition, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
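At its core, an ABC scheme of this kind repeats a simulate-summarize-compare loop. The toy sketch below shows plain rejection ABC with a stand-in simulator and a three-epoch population-size history; the prior, tolerance, and "observed" summaries are all invented for illustration, whereas PopSizeABC itself uses coalescent simulations, the folded AFS, and LD-based statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summaries(pop_sizes):
    """Stand-in for a coalescent simulator returning summary statistics
    (e.g., folded AFS and LD bins); here just a noisy toy placeholder."""
    return np.log(pop_sizes) + rng.normal(0, 0.05, size=pop_sizes.shape)

# "Observed" summaries from a hypothetical three-epoch history.
true_sizes = np.array([20000.0, 5000.0, 1000.0])
s_obs = np.log(true_sizes)

# ABC rejection: draw histories from the prior and keep those whose
# simulated summaries fall within a tolerance of the observed ones.
accepted = []
for _ in range(50000):
    theta = 10 ** rng.uniform(2, 5, size=3)    # log-uniform prior on sizes
    s_sim = simulate_summaries(theta)
    if np.linalg.norm(s_sim - s_obs) < 0.5:    # tolerance epsilon
        accepted.append(theta)

posterior = np.array(accepted)
print(posterior.shape[0], "accepted; posterior mean:", posterior.mean(axis=0))
```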
Computational Model of Music Sight Reading: A Reinforcement Learning Approach
Yahya, Keyvan
2010-01-01
Although the music sight-reading process has usually been studied from cognitive or neurological viewpoints, computational learning methods such as reinforcement learning have not yet been used to model such processes. In this paper, with regard to the essential properties of our specific problem, we consider the value function concept and show that the optimal policy can be obtained by the method we offer without computing the complex value functions, which are in most cases inexact. The algorithm we offer here is a PDE-based algorithm associated with stochastic optimization programming, and we argue that in this case it is more applicable than normative algorithms such as the temporal-difference method.
Distance Based Asynchronous Recovery Approach In Mobile Computing Environment
Directory of Open Access Journals (Sweden)
Yogita Khatri,
2012-06-01
Full Text Available A mobile computing system is a distributed system in which at least one of the processes is mobile. Such systems are constrained by lack of stable storage, low network bandwidth, mobility, frequent disconnection and limited battery life. Checkpointing is one of the most commonly used techniques to provide fault tolerance in mobile computing environments. In order to suit the mobile environment, a distance-based recovery scheme is proposed which is based on checkpointing and message logging. After the system recovers from failures, only the failed processes roll back and restart from their respective recent checkpoints, independent of the others. The salient feature of this scheme is to reduce the transfer and recovery cost: while the mobile host moves within a specific range, recovery information is not moved, and it is transferred to a nearby station only if the mobile host moves out of that range.
TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH
Energy Technology Data Exchange (ETDEWEB)
Lee, Hsien-Hsin S
2010-05-11
The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.
Computational Approaches for Probing the Formation of Atmospheric Molecular Clusters
DEFF Research Database (Denmark)
Elm, Jonas
This thesis presents the investigation of atmospheric molecular clusters using computational methods. Previous investigations have focused on solving problems related to atmospheric nucleation, and have not been targeted at the performance of the applied methods. This thesis focuses on assessing...... the performance of computational strategies in order to identify a sturdy methodology, which should be applicable for handling various issues related to atmospheric cluster formation. Density functional theory (DFT) is applied to study individual cluster formation steps. Utilizing large test sets of numerous...... and pinic acid) for atmospheric cluster formation. Glycine is found to have a similar potential as ammonia in enhancing atmospheric nucleation. Pinic acid molecules form favourable clusters with sulfuric acid, but with formation free energies which are too low to explain observed nucleation rates. Pinic...
Analysis of diabetic retinopathy biomarker VEGF gene by computational approaches
Jayashree Sadasivam; Ramesh, N.; K. Vijayalakshmi; Vinni Viridi; Shiva prasad
2012-01-01
Diabetic retinopathy, the most common diabetic eye disease, is caused by changes in the blood vessels of the retina and remains a major cause of vision loss. It is characterized by vascular permeability and increased tissue ischemia and angiogenesis. The Vascular Endothelial Growth Factor (VEGF) gene has been identified as one biomarker for diabetic retinopathy by computational analysis. VEGF is a sub-family of growth factors belonging to the platelet-derived growth factor family of cystine-knot growth factors...
Modeling Cu2+-Aβ complexes from computational approaches
Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona
2015-09-01
Amyloid plaques formation and oxidative stress are two key events in the pathology of the Alzheimer disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of the AD disease. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu2+-Aβ coordination and build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
A Computer Vision Approach to Object Tracking and Counting
Directory of Open Access Journals (Sweden)
Sergiu Mezei
2010-09-01
Full Text Available This paper introduces a new method for counting people or, more generally, objects that enter or exit a certain area/building or perimeter. We propose an algorithm that analyzes a video sequence, detects moving objects and their moving direction, and filters them according to some criteria (e.g., only humans). As a result, one obtains in and out counters for objects passing the defined perimeter. Automatic object counting is a growing application in many industrial and commercial areas. Counting can be used in statistical analysis and optimal activity scheduling methods. One of the main applications is the approximation of the number of persons passing through, or reaching, a certain area: airports (customs), shopping centers and malls, and sports or cultural activities with high attendance. The main purpose is to offer an accurate estimation while still preserving the anonymity of the visitors.
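A minimal version of the detect-and-count pipeline can be built from frame differencing and contour extraction. The OpenCV sketch below is an illustrative reconstruction, not the authors' algorithm: the video file name, the virtual counting line, and the naive nearest-neighbour association are all assumptions, and a production system would add proper tracking and person/non-person filtering.

```python
import cv2

# Frame-differencing sketch of detect-then-count (OpenCV 4.x API);
# "people.mp4" and line_y are hypothetical choices.
cap = cv2.VideoCapture("people.mp4")
ok, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)
line_y, in_count, out_count = 240, 0, 0
prev_centroids = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev_gray, gray)                     # motion mask
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < 500:                        # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroids.append((x + w // 2, y + h // 2))
    # Naive nearest-neighbour association with the previous frame;
    # a crossing is counted when a matched centroid passes line_y.
    for cx, cy in centroids:
        if not prev_centroids:
            continue
        px, py = min(prev_centroids,
                     key=lambda p: abs(p[0] - cx) + abs(p[1] - cy))
        if py < line_y <= cy:
            in_count += 1
        elif py >= line_y > cy:
            out_count += 1
    prev_gray, prev_centroids = gray, centroids

print("in:", in_count, "out:", out_count)
```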
Pancreatic trauma: The role of computed tomography for guiding therapeutic approach
Institute of Scientific and Technical Information of China (English)
Marco Moschetta; Michele Telegrafo; Valeria Malagnino; Laura Mappa; Amato A Stabile Ianora; Dario Dabbicco; Antonio Margari; Giuseppe Angelelli
2015-01-01
AIM: To evaluate the role of computed tomography (CT) for diagnosing traumatic injuries of the pancreas and guiding the therapeutic approach. METHODS: CT exams of 6740 patients admitted to our Emergency Department between May 2005 and January 2013 for abdominal trauma were retrospectively evaluated. Patients were identified through a search of our electronic archive system using such terms as "pancreatic injury", "pancreatic contusion", "pancreatic laceration", "peri-pancreatic fluid", and "pancreatic active bleeding". All CT examinations were performed before and after the intravenous injection of contrast material using a 16-slice multidetector-row CT scanner. The data sets were retrospectively analyzed by two radiologists in consensus, searching for specific signs of pancreatic injury (parenchymal fracture and laceration, focal or diffuse pancreatic enlargement/edema, pancreatic hematoma, active bleeding, fluid between the splenic vein and pancreas) and non-specific signs (inflammatory changes in peri-pancreatic fat and mesentery, fluid surrounding the superior mesenteric artery, thickening of the left anterior renal fascia, pancreatic ductal dilatation, acute pseudocyst formation/peri-pancreatic fluid collection, fluid in the anterior and posterior pararenal spaces, fluid in the transverse mesocolon and lesser sac, hemorrhage into peri-pancreatic fat, mesocolon and mesentery, extraperitoneal fluid, intraperitoneal fluid). RESULTS: 136/6740 (2%) patients showed CT signs of pancreatic trauma. 8/136 (6%) patients underwent surgical treatment, and the pancreatic injuries were confirmed in all cases. Pancreatic duct damage was suggested in the radiological reports in only 6/8 of the patients treated surgically, and was surgically confirmed in all cases. In 128/136 (94%) patients who underwent non-operative treatment, CT images showed pancreatic edema in 97 patients, hematoma in 31 patients
Computational morphology: a computational geometric approach to the analysis of form
Toussaint, GT
1988-01-01
Computational Geometry is a new discipline of computer science that deals with the design and analysis of algorithms for solving geometric problems. There are many areas of study in different disciplines which, while being of a geometric nature, have as their main component the extraction of a description of the shape or form of the input data. This notion is more imprecise and subjective than pure geometry. Such fields include cluster analysis in statistics, computer vision and pattern recognition, and the measurement of form and form-change in such areas as stereology and developmental biology.
Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.
Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker
2016-05-17
Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" at the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution to the elucidation of mechanisms and rationalizations of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been among the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion is in some cases deceiving and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods to systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions
Computational approach for calculating bound states in quantum field theory
Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.
2016-09-01
We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.
Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach
Sathish, Kuppani; RamaMohan Reddy, A.
2016-06-01
Grid computing is one of the emerging computing platforms supporting parallel and distributed environments. The main problem in grid computing is the scheduling of workflows to user specifications, a challenging task that also impacts performance. This paper proposes a hybrid GAACO approach, a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm. The GAACO approach provides different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.
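The hybrid idea can be illustrated on a toy task-to-machine scheduling problem: a GA evolves assignments, and its best schedule seeds the pheromone matrix that a subsequent ACO pass exploits. Everything below (problem size, operators, parameters) is an illustrative assumption rather than the paper's exact heuristics.

```python
import random

# Toy GA + ACO hybrid for assigning N_TASKS to N_MACHINES, minimizing
# makespan; costs are deterministic pseudo-random stand-ins.
N_TASKS, N_MACHINES = 12, 3
COST = [[random.Random(t * N_MACHINES + m).uniform(1, 9)
         for m in range(N_MACHINES)] for t in range(N_TASKS)]

def makespan(assign):
    loads = [0.0] * N_MACHINES
    for t, m in enumerate(assign):
        loads[m] += COST[t][m]
    return max(loads)

rng = random.Random(0)

# GA phase: tournament selection, uniform crossover, mutation.
pop = [[rng.randrange(N_MACHINES) for _ in range(N_TASKS)] for _ in range(30)]
for _ in range(50):
    a = min(rng.sample(pop, 3), key=makespan)
    b = min(rng.sample(pop, 3), key=makespan)
    child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
    if rng.random() < 0.3:
        child[rng.randrange(N_TASKS)] = rng.randrange(N_MACHINES)
    pop[max(range(len(pop)), key=lambda i: makespan(pop[i]))] = child
best_ga = min(pop, key=makespan)

# ACO phase: pheromones seeded from the GA's best schedule.
tau = [[1.0 + 4.0 * (best_ga[t] == m) for m in range(N_MACHINES)]
       for t in range(N_TASKS)]
best = best_ga
for _ in range(50):
    ant = [rng.choices(range(N_MACHINES), weights=tau[t])[0]
           for t in range(N_TASKS)]
    if makespan(ant) < makespan(best):
        best = ant
    for t in range(N_TASKS):          # evaporate, then reinforce best
        tau[t] = [0.9 * w for w in tau[t]]
        tau[t][best[t]] += 1.0

print("GA best:", round(makespan(best_ga), 2),
      "hybrid best:", round(makespan(best), 2))
```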
Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.
Fong, Stephen S
2014-08-01
Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.
Tucker, Laura Jane
Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis in agar plates forms two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach was not known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.
Computer-aided diagnostic approach of dermoscopy images acquiring relevant features
Castillejos-Fernández, H.; Franco-Arcega, A.; López-Ortega, O.
2016-09-01
In skin cancer detection, automated analysis of the borders, colors, and structures of a lesion relies upon an accurate segmentation process and is an important first step in any Computer-Aided Diagnosis (CAD) system. However, irregular and disperse lesion borders, low contrast, artifacts in the images and the variety of colors within the region of interest make the problem difficult. In this paper, we propose an efficient approach for automatic classification which considers specific lesion features. First, for the selection of the lesion skin we employ the segmentation algorithm W-FCM. Then, in the feature extraction stage we consider several aspects: the area of the lesion, which is calculated by correlating axes, and the specific value of asymmetry along both axes. For color analysis we employ an ensemble of clusterers including K-Means, Fuzzy K-Means and Kohonen maps, all of which estimate the presence of one or more colors defined in the ABCD rule and the values for each of the segmented colors. Another aspect to consider is the type of structures that appear in the lesion; these are defined using the well-known GLCM method. During the classification stage we compare several methods in order to decide whether the lesion is benign or malignant. An important contribution of the current approach to the segmentation-classification problem resides in the use of information from all color channels together, as well as the measure of each color in the lesion and the axes correlation. The segmentation and classification performance has been measured using sensitivity, specificity, accuracy and the AUC metric over a set of dermoscopy images from the ISDIS data set.
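As an illustration of the asymmetry feature mentioned above, the following numpy sketch scores a binary lesion mask by flipping it about each axis and measuring the non-overlap; this flip-overlap score is a simple stand-in for the paper's axis-correlation measure, and the elliptical "lesion" is synthetic.

```python
import numpy as np

def asymmetry_scores(mask):
    """Toy asymmetry feature in the spirit of the ABCD rule: crop the
    binary lesion mask to its bounding box, flip it about each axis,
    and report the fraction of non-overlapping pixels (0 = symmetric)."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(bool)
    area = crop.sum()
    a_vert = np.logical_xor(crop, crop[:, ::-1]).sum() / (2 * area)
    a_horz = np.logical_xor(crop, crop[::-1, :]).sum() / (2 * area)
    return a_vert, a_horz

# Usage with a hypothetical elliptical "lesion" plus an off-axis bump.
yy, xx = np.mgrid[0:200, 0:200]
lesion = ((xx - 100) ** 2 / 60 ** 2 + (yy - 100) ** 2 / 35 ** 2) <= 1.0
lesion |= (xx - 150) ** 2 + (yy - 100) ** 2 <= 15 ** 2   # asymmetric bump
print(asymmetry_scores(lesion))
```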
Synergy between experimental and computational approaches to homogeneous photoredox catalysis.
Demissie, Taye B; Hansen, Jørn H
2016-07-01
In this Frontiers article, we highlight how state-of-the-art density functional theory calculations can contribute to the field of homogeneous photoredox catalysis. We discuss challenges in the field and potential solutions to be found at the interface between theory and experiment. The exciting opportunities and insights that can arise through such an interdisciplinary approach are highlighted.
Electromagnetic space-time crystals. II. Fractal computational approach
2014-01-01
A fractal approach to the numerical analysis of electromagnetic space-time crystals, created by three standing plane harmonic waves with mutually orthogonal phase planes and the same frequency, is presented. Finite models of electromagnetic crystals are introduced, which make it possible to obtain various approximate solutions of the Dirac equation. A criterion for evaluating the accuracy of these approximate solutions is suggested.
Efficient and accurate fragmentation methods.
Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S
2014-09-16
Conspectus: Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum
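The FMO idea is easiest to see in its two-body (FMO2) energy expansion, E = Σ_I E_I + Σ_{I<J} (E_IJ − E_I − E_J). The sketch below assembles a total energy from monomer and dimer energies; the three water-like fragments and their energies are hypothetical stand-ins for values an electronic structure package such as GAMESS would produce.

```python
from itertools import combinations

# Hypothetical monomer and dimer energies (hartree) standing in for
# values from fragment calculations.
E_mono = {"A": -76.432, "B": -76.428, "C": -76.430}
E_dimer = {("A", "B"): -152.874, ("A", "C"): -152.868, ("B", "C"): -152.861}

def fmo2_energy(monomers, dimers):
    # E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J)
    total = sum(monomers.values())
    for (i, j) in combinations(sorted(monomers), 2):
        total += dimers[(i, j)] - monomers[i] - monomers[j]  # pair correction
    return total

print(f"FMO2 total energy: {fmo2_energy(E_mono, E_dimer):.3f} hartree")
```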
Computational approaches to identify functional genetic variants in cancer genomes
DEFF Research Database (Denmark)
Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris;
2013-01-01
The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority of these drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.
Essential algorithms: a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan
2011-05-14
We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time-step size to achieve a user-specified bound on the discretization error and allows time-step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
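The step-size control described here follows a generic pattern: estimate the local error, accept the step if it meets the tolerance, and rescale dt accordingly. The sketch below applies that pattern to forward Euler on a toy ODE, using step-doubling for the error estimate; it illustrates the mechanism only and is not the paper's estimator or the diffusive wave solver.

```python
import numpy as np

def step(f, u, t, dt):
    return u + dt * f(t, u)                       # forward Euler

def adaptive_integrate(f, u0, t0, t1, dt, tol=1e-4):
    u, t = u0, t0
    while t < t1:
        dt = min(dt, t1 - t)
        full = step(f, u, t, dt)                  # one full step
        half = step(f, step(f, u, t, dt / 2), t + dt / 2, dt / 2)
        err = abs(half - full)                    # step-doubling estimate
        if err <= tol:                            # accept the step
            u, t = half, t + dt
        # first-order method: rescale factor ~ (tol/err)^(1/2), clamped
        dt *= min(4.0, max(0.1, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return u

print(adaptive_integrate(lambda t, u: -u, 1.0, 0.0, 5.0, dt=0.5))
```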
A uniform approach for programming distributed heterogeneous computing systems.
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-12-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
Cognitive control in majority search: a computational modeling approach.
Wang, Hongbin; Liu, Xun; Fan, Jin
2011-01-01
Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function.
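The two algorithms contrasted above are simple enough to state directly in code. The toy sketch below implements both for a binary display: grouping search resamples subgroups until one is congruent, while self-terminating search counts items until a majority is guaranteed; the display contents and subgroup size are arbitrary choices.

```python
import random

def grouping_search(items, k=3, rng=random):
    """Sample subgroups of size k until one is congruent (all same)."""
    while True:
        sample = rng.sample(items, k)
        if len(set(sample)) == 1:       # congruent sample found
            return sample[0]

def self_terminating_search(items):
    """Scan and count one-by-one until a majority is guaranteed."""
    need = len(items) // 2 + 1
    counts = {0: 0, 1: 0}
    for x in items:
        counts[x] += 1
        if counts[x] >= need:
            return x

display = [1, 1, 0, 1, 0]               # hypothetical 5-item display
print(grouping_search(display), self_terminating_search(display))
```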
One-loop kink mass shifts: a computational approach
Alonso-Izquierdo, Alberto
2011-01-01
In this paper we develop a procedure to compute the one-loop quantum correction to the kink masses in generic (1+1)-dimensional one-component scalar field theoretical models. The procedure uses the generalized zeta function regularization method, aided by the Gilkey-de Witt asymptotic expansion of the heat function via Mellin's transform. We find a formula for the one-loop kink mass shift that depends only on the part of the energy density with no field derivatives, evaluated by means of a symbolic software algorithm that automates the computation. The improved algorithm with respect to earlier work on this subject has been tested in the sine-Gordon and $\lambda(\phi^4)_2$ models. The quantum corrections of the sG-soliton and $\lambda(\phi^4)_2$-kink masses have been estimated with a relative error of 0.00006% and 0.00007%, respectively. Thereafter, the algorithm is applied to other models. In particular, an interesting one-parametric family of double sine-Gordon models interpolating between the ordinary sine-...
A Computational Differential Geometry Approach to Grid Generation
Liseikin, Vladimir D
2007-01-01
The process of breaking up a physical domain into smaller sub-domains, known as meshing, facilitates the numerical solution of partial differential equations used to simulate physical systems. This monograph gives a detailed treatment of applications of geometric methods to advanced grid technology. It focuses on and describes a comprehensive approach based on the numerical solution of inverted Beltramian and diffusion equations with respect to monitor metrics for generating both structured and unstructured grids in domains and on surfaces. In this second edition the author takes a more detailed and practice-oriented approach towards explaining how to implement the method by: Employing geometric and numerical analyses of monitor metrics as the basis for developing efficient tools for controlling grid properties. Describing new grid generation codes based on finite differences for generating both structured and unstructured surface and domain grids. Providing examples of applications of the codes to the genera...
A Hybrid Approach Towards Intrusion Detection Based on Artificial Immune System and Soft Computing
Sanyal, Sugata
2012-01-01
A number of works in the field of intrusion detection have been based on Artificial Immune Systems and Soft Computing. Artificial Immune System based approaches attempt to leverage the adaptability, error tolerance, self-monitoring and distributed nature of Human Immune Systems, whereas Soft Computing based approaches are instrumental in developing fuzzy rule-based systems for detecting intrusions. The latter are computationally intensive and apply machine learning (both supervised and unsupervised) techniques to detect intrusions in a given system. A combination of these two approaches could provide significant advantages for intrusion detection. In this paper we attempt to leverage the adaptability of Artificial Immune Systems and the computationally intensive nature of Soft Computing to develop a system that can effectively detect intrusions in a given network.
Soft computing approach to pattern classification and object recognition: a unified concept
Ray, Kumar S
2012-01-01
Soft Computing Approach to Pattern Classification and Object Recognition establishes an innovative, unified approach to supervised pattern classification and model-based occluded object recognition. The book also surveys various soft computing tools, fuzzy relational calculus (FRC), genetic algorithm (GA) and multilayer perceptron (MLP) to provide a strong foundation for the reader. The supervised approach to pattern classification and model-based approach to occluded object recognition are treated in one framework, one based on either a conventional interpretation or a new interpretation of
A computationally efficient approach for template matching-based image registration
Indian Academy of Sciences (India)
Vilas H Gaidhane; Yogesh V Hote; Vijander Singh
2014-04-01
Image registration using template matching is an important step in image processing. In this paper, a simple, robust and computationally efficient approach is presented. The proposed approach is based on the properties of a normalized covariance matrix. The main advantage of the proposed approach is that image matching can be achieved without calculating the eigenvalues and eigenvectors of the covariance matrix, which reduces the computational complexity. The experimental results show that the proposed approach performs better in the presence of various noises and rigid geometric transformations.
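For context, the baseline such methods improve upon is exhaustive normalized matching: slide the template over the image and score each window by the normalized covariance (Pearson correlation). The numpy sketch below implements that baseline; it is not the paper's eigen-free formulation, and the test image and patch are synthetic.

```python
import numpy as np

def match_template(image, tmpl):
    """Exhaustive template matching scored by normalized covariance
    (Pearson correlation between template and each window)."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tnorm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Usage: hide a small patch in a noisy image and recover its location.
rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
patch = img[20:28, 30:38].copy()
print(match_template(img + 0.05 * rng.normal(size=img.shape), patch))
```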
Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...
Method in computer ethics: Towards a multi-level interdisciplinary approach
Brey, Philip
2000-01-01
This essay considers methodological aspects of computer ethics and argues for a multi-level interdisciplinary approach with a central role for what is called disclosive computer ethics. Disclosive computer ethics is concerned with the moral deciphering of embedded values and norms in computer systems, app
Energy Technology Data Exchange (ETDEWEB)
Chacón, Enrique, E-mail: echacon@icmm.csic.es [Instituto de Ciencia de Materiales de Madrid, CSIC, 28049 Madrid, Spain and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Tarazona, Pedro, E-mail: pedro.tarazona@uam.es [Departamento de Física Teórica de la Materia Condensada, Condensed Matter Physics Center (IFIMAC), and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, Madrid 28049 (Spain); Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk [Department of Chemistry, Imperial College London, SW7 2AZ London (United Kingdom)
2015-07-21
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus show a negligible dependence on system size, making it possible to compute them using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
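The fluctuation route mentioned above rests on the standard relation K_A = k_B T ⟨A⟩ / ⟨δA²⟩. The sketch below applies it to a synthetic area time series standing in for the CU area; the mean and variance are chosen only so that the output lands in the range typical of phospholipid bilayers (a few tenths of N/m).

```python
import numpy as np

# Fluctuation formula for the area compressibility modulus:
# K_A = kB * T * <A> / var(A), on a synthetic area time series.
kB = 1.380649e-23          # J/K
T = 300.0                  # K

rng = np.random.default_rng(2)
A_mean, A_std = 4.0e-17, 8.0e-19          # m^2, illustrative values
areas = rng.normal(A_mean, A_std, size=20000)

K_A = kB * T * areas.mean() / areas.var()  # N/m
print(f"K_A ~ {K_A:.3f} N/m")              # ~0.26 N/m with these numbers
```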
A Computational Approach to Politeness with Application to Social Factors
Danescu-Niculescu-Mizil, Cristian; Jurafsky, Dan; Leskovec, Jure; Potts, Christopher
2013-01-01
We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our class...
Computational approaches for efficient modelling of small atmospheric clusters
DEFF Research Database (Denmark)
Elm, Jonas; Mikkelsen, Kurt Valentin
2014-01-01
Utilizing a comprehensive test set of 205 clusters of atmospheric relevance, we investigate how different DFT functionals (M06-2X, PW91, ωB97X-D) and basis sets (6-311++G(3df,3pd), 6-31++G(d,p), 6-31+G(d)) affect the thermal contribution to the Gibbs free energy and single point energy. Reducing...... the basis set used in the geometry and frequency calculation from 6-311++G(3df,3pd) → 6-31++G(d,p) implies a significant speed-up in computational time and only leads to small errors in the thermal contribution to the Gibbs free energy and subsequent coupled cluster single point energy calculation....
MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing
Directory of Open Access Journals (Sweden)
J.K. Verma
2014-06-01
Full Text Available The last few decades have witnessed steep growth in the demand for higher computational power. This is largely due to the shift from the industrial age to the Information and Communication Technology (ICT) age, which was in large part the result of the digital revolution. This trend in demand led to the establishment of large-scale data centers at geographically distant locations. These large-scale data centers consume a large amount of electrical energy, which results in very high operating costs and a large amount of carbon dioxide (CO2) emission due to resource underutilization. We propose the MADLVF algorithm to overcome problems such as resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.
Crack Propagation in Honeycomb Cellular Materials: A Computational Approach
Directory of Open Access Journals (Sweden)
Marco Paggi
2012-02-01
Full Text Available Computational models based on the finite element method and linear or nonlinear fracture mechanics are herein proposed to study the mechanical response of functionally designed cellular components. It is demonstrated that, via a suitable tailoring of the properties of interfaces present in the meso- and micro-structures, the tensile strength can be substantially increased as compared to that of a standard polycrystalline material. Moreover, numerical examples regarding the structural response of these components when subjected to loading conditions typical of cutting operations are provided. As a general trend, the occurrence of tortuous crack paths is highly favorable: stable crack propagation can be achieved in case of critical crack growth, whereas an increased fatigue life can be obtained for a sub-critical crack propagation.
Economic growth rate management by soft computing approach
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir
2017-01-01
Economic growth rate management is a very important process for improving the economic stability of any country. The main goal of the study was to analyze the impact of agriculture, manufacturing, industry and services on economic growth rate prediction. A soft computing methodology was used to select the influence of the inputs on the economic growth rate prediction. It is known that economic growth may develop on the basis of a combination of different factors. Gross domestic product (GDP) was used as the economic growth indicator. It was found that services have the highest impact on the GDP growth rate. On the contrary, manufacturing has the smallest impact on the GDP growth rate.
Computational Approaches for Modeling the Multiphysics in Pultrusion Process
DEFF Research Database (Denmark)
Carlone, P.; Baran, Ismet; Hattel, Jesper Henri;
2013-01-01
Pultrusion is a continuous manufacturing process used to produce high strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect...... the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer......, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared...
A computational approach to the twin paradox in curved spacetime
Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng
2016-09-01
Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
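The computational method's core step, accumulating proper time along a discretized worldline, can be sketched compactly. The example below integrates dτ² = (1 − 2M/r) dt² − (1 − 2M/r)⁻¹ dr² − r² dφ² for two equatorial paths in Schwarzschild spacetime (geometric units G = c = 1); the mass, radii, and paths are illustrative choices, not the students' actual project code.

```python
import numpy as np

M = 1.0   # illustrative black hole mass (geometric units)

def proper_time(t, r, phi):
    """Accumulate tau along a discretized equatorial worldline."""
    dt, dr, dphi = np.diff(t), np.diff(r), np.diff(phi)
    rm = 0.5 * (r[:-1] + r[1:])                 # midpoint radius
    f = 1.0 - 2.0 * M / rm
    dtau2 = f * dt**2 - dr**2 / f - rm**2 * dphi**2
    return np.sqrt(np.clip(dtau2, 0.0, None)).sum()

t = np.linspace(0.0, 1000.0, 100001)
# Twin 1: hovers at fixed radius r = 10M.
tau_hover = proper_time(t, np.full_like(t, 10.0), np.zeros_like(t))
# Twin 2: circular orbit at r = 10M, angular velocity sqrt(M/r^3).
omega = np.sqrt(M / 10.0**3)
tau_orbit = proper_time(t, np.full_like(t, 10.0), omega * t)
print(tau_hover, tau_orbit)   # the orbiting twin ages less
```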
Perturbation approach for nuclear magnetic resonance solid-state quantum computation
Directory of Open Access Journals (Sweden)
G. P. Berman
2003-01-01
Full Text Available The dynamics of a nuclear-spin quantum computer with a large number (L=1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of an entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to the different orders of the perturbation theory and tested against the exact numerical solution.
Kerfriden, Pierre; Goury, Olivier; Khac Chi, Hoang; Bordas, Stéphane
2014-01-01
Computational homogenisation is a widely spread technique to calculate the overall properties of a composite material from the knowledge of the constitutive laws of its microscopic constituents [1, 2]. Indeed, it relies on fewer assumptions than analytical or semi-analytical homogenisation approaches and can be used to coarse-grain a large range of micro-mechanical models. However, this accuracy comes at large computational costs, which prevents computational homogenisation from b...
Safe manning of merchant ships: an approach and computer tool
DEFF Research Database (Denmark)
Alapetite, Alexandre; Kozin, Igor
2017-01-01
In the shipping industry, staffing expenses have become a vital competition parameter. In this paper, an approach and a software tool are presented to support decisions on the staffing of merchant ships. The tool is implemented in the form of a Web user interface that makes use of discrete-event...... simulation and allows estimation of the workload and of whether different scenarios are successfully performed taking account of the number of crewmembers, watch schedules, distribution of competencies, and others. The software library ‘SimManning’ at the core of the project is provided as open source...
Canonical approach to finite density QCD with multiple precision computation
Fukuda, Ryutaro; Oka, Shotaro
2015-01-01
We calculate the baryon chemical potential ($\mu_B$) dependence of thermodynamic observables, i.e., pressure, baryon number density and susceptibility, by lattice QCD using the canonical approach. We compare the results with those obtained by the multi-parameter reweighting (MPR) method; both methods give very consistent values in the regions where the errors of the MPR are under control. The canonical method gives reliable results over $\mu_B/T = 3$, with $T$ being the temperature. Multiple precision operations play an important role in the evaluation of the canonical partition functions.
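The reassembly step of the canonical approach, and why multiple precision matters for it, can be sketched as follows: the grand partition function is a fugacity sum Z(μ_B) = Σ_n Z_n e^{n μ_B/T} whose terms span many orders of magnitude. The mpmath example below uses made-up canonical coefficients Z_n and an illustrative temperature; the density follows from a derivative of ln Z.

```python
import mpmath as mp

mp.mp.dps = 50                          # 50 decimal digits of precision

T = mp.mpf("0.16")                      # GeV, illustrative temperature
# Toy canonical coefficients Z_n for net baryon number n; real Z_n come
# from Fourier transforms of the lattice determinant and span ~70+ orders
# of magnitude, which is what motivates multiple precision.
Z_n = {n: mp.exp(-abs(n) * 8) for n in range(-20, 21)}

def grand_Z(mu_B):
    # Z(mu_B) = sum_n Z_n * exp(n * mu_B / T)
    return mp.fsum(Z_n[n] * mp.exp(n * mu_B / T) for n in Z_n)

def baryon_density(mu_B, h=mp.mpf("1e-10")):
    # n_B ~ T * d(ln Z)/d(mu_B), via a high-precision central difference
    return T * (mp.log(grand_Z(mu_B + h)) - mp.log(grand_Z(mu_B - h))) / (2 * h)

print(baryon_density(mp.mpf("0.3")))
```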
Data analysis of asymmetric structures: advanced approaches in computational statistics
Saito, Takayuki
2004-01-01
Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.
Flexibility and practicality: Graz brain-computer interface approach.
Scherer, Reinhold; Müller-Putz, Gernot R; Pfurtscheller, Gert
2009-01-01
"Graz brain-computer interface (BCI)" transforms changes in oscillatory electroencephalogram (EEG) activity into control signals for external devices and feedback. Steady-state evoked potentials (SSEPs) and event-related desynchronization (ERD) are employed to encode user messages. User-specific setup and training are important issues for robust and reliable classification. Furthermore, in order to implement small and thus affordable systems, focus is put on the minimization of the number of EEG sensors. The system also supports the self-paced operation mode, that is, users have on-demand access to the system at any time and can autonomously initiate communication. Flexibility, usability, and practicality are essential to increase user acceptance. Here, we illustrate the possibilities offered by now from EEG-based communication. Results of several studies with able-bodied and disabled individuals performed inside the laboratory and in real-world environments are presented; their characteristics are shown and open issues are mentioned. The applications include the control of neuroprostheses and spelling devices, the interaction with Virtual Reality, and the operation of off-the-shelf software such as Google Earth.
Non-racemic mixture model: a computational approach.
Polanco, Carlos; Buhse, Thomas
2017-01-01
The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified couple of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small ascending but not conclusive tendency in the l-amino acid over the d-amino acid profile for the resulting mixed chiral hexamers in computer simulations of 100 peptide generations. This simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations to observe any alteration that could mean a drastic change in behavior. So far, our simulations lead to the assumption that under the exposure of very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.
Computational approaches to 3D modeling of RNA
Energy Technology Data Exchange (ETDEWEB)
Laing, Christian; Schlick, Tamar, E-mail: schlick@nyu.ed [Department of Chemistry and Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012 (United States)
2010-07-21
Many exciting discoveries have recently revealed the versatility of RNA and its importance in a variety of functions within the cell. Since the structural features of RNA are of major importance to their biological function, there is much interest in predicting RNA structure, either in free form or in interaction with various ligands, including proteins, metabolites and other molecules. In recent years, an increasing number of researchers have developed novel RNA algorithms for predicting RNA secondary and tertiary structures. In this review, we describe current experimental and computational advances and discuss recent ideas that are transforming the traditional view of RNA folding. To evaluate the performance of the most recent RNA 3D folding algorithms, we provide a comparative study in order to test the performance of available 3D structure prediction algorithms for an RNA data set of 43 structures of various lengths and motifs. We find that the algorithms vary widely in terms of prediction quality across different RNA lengths and topologies; most predictions have very large root mean square deviations from the experimental structure. We conclude by outlining some suggestions for future RNA folding research. (topical review)
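The comparative study described above hinges on a single number per prediction: the RMSD after optimal superposition. The sketch below implements the standard Kabsch procedure with numpy and tests it on synthetic coordinates (a structure versus a rotated, perturbed copy); it is the generic measure, not the review's evaluation pipeline.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal
    superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))          # avoid improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt         # optimal rotation
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Usage with hypothetical coordinates: a structure vs a rotated,
# slightly perturbed copy of itself.
rng = np.random.default_rng(3)
P = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + 0.1 * rng.normal(size=P.shape)
print(f"RMSD = {kabsch_rmsd(P, Q):.3f}")
```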
A computational toy model for shallow landslides: Molecular Dynamics approach
Martelloni, Gianluca; Massaro, Emanuele
2012-01-01
The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We used a Molecular Dynamics (MD) inspired model, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. Hence the triggering is determined by the two following conditions: (a) a threshold speed of the particles and (b) a condition on the static friction, between particles and slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. Finally, the interaction force between particles is defined through a potential that, in the absence of experimental data, we have mode...
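Condition (b) above is the classic Mohr-Coulomb criterion: failure starts once the shear stress exceeds c + σ_n tan φ on the sliding surface. The sketch below evaluates it for an infinite-slope geometry as rainfall gradually erodes cohesion; the slope angle, soil parameters, and cohesion decay are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Infinite-slope Mohr-Coulomb check; all parameters are illustrative.
g, rho, depth = 9.81, 1800.0, 1.0          # m/s^2, kg/m^3, m
slope = np.radians(35.0)

def mohr_coulomb_triggered(cohesion, phi_deg):
    sigma_n = rho * g * depth * np.cos(slope) ** 2          # normal stress (Pa)
    tau = rho * g * depth * np.sin(slope) * np.cos(slope)   # shear stress (Pa)
    tau_max = cohesion + sigma_n * np.tan(np.radians(phi_deg))
    return tau > tau_max

# Rainfall infiltration modeled as a gradual loss of cohesion.
for hour, c in enumerate(np.linspace(8000.0, 0.0, 9)):      # Pa
    if mohr_coulomb_triggered(c, phi_deg=30.0):
        print(f"failure triggered after ~{hour} h (c = {c:.0f} Pa)")
        break
```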
Localized tissue mineralization regulated by bone remodelling: A computational approach
Decco, Oscar; Adams, George; Cook, Richard B.; García Aznar, José Manuel
2017-01-01
Bone is a living tissue whose main mechanical function is to provide stiffness, strength and protection to the body. Both stiffness and strength depend on the mineralization of the organic matrix, which is constantly being remodelled by the coordinated action of the bone multicellular units (BMUs). Due to the dynamics of both remodelling and mineralization, each sample of bone is composed of structural units (osteons in cortical and packets in cancellous bone) created at different times, therefore presenting different levels of mineral content. In this work, a computational model is used to understand the feedback between the remodelling and the mineralization processes under different load conditions and bone porosities. This model considers that osteoclasts primarily resorb those parts of bone closer to the surface, which are younger and less mineralized than older inner ones. Under equilibrium loads, results show that bone volumes with both the highest and the lowest levels of porosity (cancellous and cortical respectively) tend to develop higher levels of mineral content compared to volumes with intermediate porosity, thus presenting higher material densities. In good agreement with recent experimental measurements, a boomerang-like pattern emerges when plotting apparent density at the tissue level versus material density at the bone material level. Overload and disuse states are studied too, resulting in a translation of the apparent–material density curve. Numerical results are discussed pointing to potential clinical applications. PMID:28306746
Granular computing and decision-making interactive and iterative approaches
Chen, Shyi-Ming
2015-01-01
This volume is devoted to interactive and iterative processes of decision-making: I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems. The volume provides the reader with an updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, brings new concepts, architectures and practice of fuzzy decision-making providing the reader with various application studies. The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...
Partial order approach to compute shortest paths in multimodal networks
Ensor, Andrew
2011-01-01
Many networked systems involve multiple modes of transport. Such systems are called multimodal, and examples include logistic networks, biomedical phenomena, manufacturing processes and telecommunication networks. Existing techniques for determining optimal paths in multimodal networks have either required heuristics or else application-specific constraints to obtain tractable problems, removing the multimodal traits of the network during analysis. In this paper weighted coloured-edge graphs are introduced to model multimodal networks, where colours represent the modes of transportation. Optimal paths are selected using a partial order that compares the weights in each colour, resulting in a Pareto optimal set of shortest paths. This approach is shown to be tractable through experimental analyses for random and real multimodal networks without the need to apply heuristics or constraints.
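A minimal sketch of the partial order described above: each path accumulates one weight per colour (mode), and a path is kept when no other path is at least as good in every colour and strictly better in one. The per-colour weights below are illustrative.

    def dominates(a, b):
        """a dominates b when it is no worse in every colour and strictly
        better in at least one (the Pareto partial order)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(paths):
        """Keep only the non-dominated (Pareto-optimal) path weights."""
        return [p for p in paths if not any(dominates(q, p) for q in paths)]

    # Per-colour accumulated weights, e.g. (road, rail), for three paths
    print(pareto_front([(5, 2), (3, 4), (6, 3)]))  # (6, 3) is dominated by (5, 2)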
Computer Mediated Social Network Approach to Software Support and Maintenance
2010-06-01
The report traces the roots of social network analysis to mathematics (Euler, 1741; Sachs, Stiebitz, & Wilson, 1988), philosophy (Durkheim, 2001), and the social sciences, including work on the strength of connections (Granovetter, 1973; 1983) and the number of connections (Milgram, 1967).
Computational and Experimental Approaches to Cancer Biomarker Discovery
DEFF Research Database (Denmark)
Krzystanek, Marcin
Effective cancer treatment requires good biomarkers: measurable indicators of some biological state or condition that constitute the cornerstone of personalized medicine. Prognostic biomarkers provide information about the likely course of the disease, while predictive biomarkers enable prediction of a patient's response to a particular treatment, thus helping to avoid unnecessary treatment and unwanted side effects in non-responding individuals. Currently biomarker discovery is facilitated by recent advances in high-throughput technologies when association between a given biological phenotype ... with a purely biological, experimental approach where the effects of treatment with cytotoxic agents or defects in DNA repair mechanisms can be individually quantified and turned into mutational signatures. In the second part of the thesis I present work towards identification and improvement of the current...
Fault-tolerant quantum computation -- a dynamical systems approach
Fern, Jesse; Kempe, Julia; Simic, Slobodan; Sastry, Shankar
2004-01-01
We apply a dynamical systems approach to concatenation of quantum error correcting codes, extending and generalizing the results of Rahn et al. [8] to both diagonal and nondiagonal channels. Our point of view is global: instead of focusing on particular types of noise channels, we study the geometry of the coding map as a discrete-time dynamical system on the entire space of noise channels. In the case of diagonal channels, we show that any code with distance at least three corrects (in the infinite concatenation limit) an open set of errors. For CSS codes, we give a more precise characterization of that set. We show how to incorporate noise in the gates, thus completing the framework. We derive some general bounds for noise channels, which allows us to analyze several codes in detail.
Computational Approaches to Consecutive Pattern Avoidance in Permutations
Nakamura, Brian
2011-01-01
In recent years, there has been increasing interest in consecutive pattern avoidance in permutations. In this paper, we introduce two approaches to counting permutations that avoid a set of prescribed patterns consecutively. These algorithms have been implemented in the accompanying Maple package CAV, which can be downloaded from the author's website. As a byproduct of the first algorithm, we have a theorem giving a sufficient condition for when two pattern sets are strongly (consecutively) Wilf-equivalent. For the implementation of the second algorithm, we define the cluster tail generating function and show that it always satisfies a certain functional equation. We also explain how the CAV package can be used to approximate asymptotic constants for single pattern avoidance.
Data science in R a case studies approach to computational reasoning and problem solving
Nolan, Deborah
2015-01-01
Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar
Moskon, Miha; Mraz, Miha
2014-01-01
We present several measures that can be used in the de novo computational design of biological systems with information processing capabilities. Their main purpose is to objectively evaluate the behavior and identify the biological information processing structures with the best dynamical properties. They can be used to define constraints that allow one to simplify the design of more complex biological systems. These measures can be applied to existing computational design approaches in synthetic biology, i.e., rational and automatic design approaches. We demonstrate their use on (a) the computational models of several basic information processing structures implemented with gene regulatory networks and (b) a modular design of a synchronous toggle switch.
Wang, Mengyu; Brigham, John C.
2017-03-01
A computationally efficient gradient-based optimization approach for inverse material characterization from incomplete system response measurements that can utilize a generally applicable parameterization (e.g., finite element-type parameterization) is presented and evaluated. The key to this inverse characterization algorithm is the use of a direct inversion strategy with Gappy proper orthogonal decomposition (POD) response field estimation to initialize the inverse solution estimate prior to gradient-based optimization. Gappy POD is used to estimate the complete (i.e., all components over the entire spatial domain) system response field from incomplete (e.g., partial spatial distribution) measurements obtained from some type of system testing along with some amount of a priori information regarding the potential distribution of the unknown material property. The estimated complete system response is used within a physics-based direct inversion procedure with a finite element-type parameterization to estimate the spatial distribution of the desired unknown material property with minimal computational expense. Then, this estimated spatial distribution of the unknown material property is used to initialize a gradient-based optimization approach, which uses the adjoint method for computationally efficient gradient calculations, to produce the final estimate of the material property distribution. The three-step [(1) Gappy POD, (2) direct inversion, and (3) gradient-based optimization] inverse characterization approach is evaluated through simulated test problems based on the characterization of elastic modulus distributions with localized variations (e.g., inclusions) within simple structures. Overall, this inverse characterization approach is shown to efficiently and consistently provide accurate inverse characterization estimates for material property distributions from incomplete response field measurements. Moreover, the solution procedure is shown to be capable
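A minimal NumPy sketch of the Gappy POD estimation step described above, under illustrative assumptions: the snapshot matrix stands in for a priori forward solves, the "true" field is taken to lie in the span of the POD basis, and only a random 20% of the response field is "measured".

    import numpy as np

    rng = np.random.default_rng(0)

    # A priori snapshots of plausible response fields (columns), e.g. from
    # forward finite element solves over candidate property distributions
    S = rng.standard_normal((200, 30))
    Phi = np.linalg.svd(S, full_matrices=False)[0][:, :5]  # truncated POD basis

    true_field = Phi @ rng.standard_normal(5)   # synthetic complete field
    mask = rng.random(200) < 0.2                # ~20% of entries measured
    y = true_field[mask]                        # incomplete measurements

    # Gappy step: least-squares fit of basis coefficients on observed rows
    a, *_ = np.linalg.lstsq(Phi[mask, :], y, rcond=None)
    field_est = Phi @ a                         # estimated complete field
    print(np.linalg.norm(field_est - true_field) / np.linalg.norm(true_field))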
A New Approach to Practical Active-Secure Two-Party Computation
DEFF Research Database (Denmark)
Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio
2012-01-01
We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...
A New Approach to Practical Active-Secure Two-Party Computation
DEFF Research Database (Denmark)
Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio
2011-01-01
We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...
Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen
2013-01-01
Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…
Petra, Cosmin G.
2014-01-01
We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
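The PIPS/PARDISO machinery itself is not reproducible here; as a hedged, generic analogue, the SciPy sketch below uses an incomplete LU factorization as the inexact inner factorization and wraps it in an outer BiCGStab iteration, illustrating how the Krylov iteration absorbs the inaccuracy of an approximate factorization. The matrix, density, and drop tolerance are illustrative.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(1)
    n = 500
    A = (sp.random(n, n, density=0.01, random_state=1) + 5.0 * sp.eye(n)).tocsc()
    b = rng.standard_normal(n)

    # Incomplete LU plays the role of the inexact inner factorization
    ilu = spla.spilu(A, drop_tol=1e-3)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    # The outer BiCGStab iteration absorbs the factorization's inaccuracy
    x, info = spla.bicgstab(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))  # info == 0 indicates convergence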
An Integrative Computational Approach for Prioritization of Genomic Variants
Wang, Sheng; Meyden, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad
2014-01-01
An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and suggested additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest. PMID:25506935
An integrative computational approach for prioritization of genomic variants.
Directory of Open Access Journals (Sweden)
Inna Dubchak
Full Text Available An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and suggested additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.
Effects of artificial gravity on the cardiovascular system: Computational approach
Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.
2016-09-01
... steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configurations and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.
A Novel Approach for Reduce Energy Consumption in Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Najmeh Moghadasi
2015-09-01
Full Text Available In recent years, the use of mobile devices has come to occupy a special place in human life, and the applicability of these devices has led to an increasing number of users. Business companies have integrated them with cloud computing technology and have provided the mobile cloud in order to improve the use of mobile devices and overcome their energy constraints. In mobile cloud computing, the computation and storage of mobile device applications are transferred to cloud data centers, and mobile devices are used merely as user interfaces to access services. Therefore, cloud computing can help reduce the energy consumption of mobile devices. In this paper, a new approach based on Learning Automata is given to reduce energy consumption in mobile cloud computing. Simulation results show that our proposed approach dramatically saves energy by determining the appropriate execution location for each application.
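The abstract does not detail the automaton itself; the sketch below assumes a linear reward-inaction (L_RI) learning automaton choosing an execution location, with a hypothetical energy-feedback rule, purely to illustrate the mechanism.

    import random

    class LearningAutomaton:
        """Linear reward-inaction (L_RI) automaton over candidate
        execution locations for an application component."""
        def __init__(self, actions, lr=0.1):
            self.actions = actions
            self.p = [1.0 / len(actions)] * len(actions)
            self.lr = lr

        def choose(self):
            return random.choices(range(len(self.actions)), weights=self.p)[0]

        def reward(self, i):
            # Shift probability mass toward the rewarded action
            self.p = [(1 - self.lr) * pj for pj in self.p]
            self.p[i] += self.lr

    la = LearningAutomaton(["local", "cloudlet", "cloud"])
    for _ in range(300):
        i = la.choose()
        # Hypothetical feedback: reward when the measured energy cost is low
        energy = {"local": 1.0, "cloudlet": 0.4, "cloud": 0.6}[la.actions[i]]
        if energy < 0.5:
            la.reward(i)
    print({a: round(p, 2) for a, p in zip(la.actions, la.p)})

Over repeated runs the probability mass concentrates on the lowest-energy location, which is the intended placement decision.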
Energy Technology Data Exchange (ETDEWEB)
Filippi, Anthony [Texas A&M University]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]
2012-01-01
For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.
Energy Technology Data Exchange (ETDEWEB)
Filippi, Anthony M [ORNL]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]
2012-01-01
For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.
Energy-aware memory management for embedded multimedia systems a computer-aided design approach
Balasa, Florin
2011-01-01
Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques
A computational approach to understand in vitro alveolar morphogenesis.
Directory of Open Access Journals (Sweden)
Sean H J Kim
Full Text Available Primary human alveolar type II (AT II) epithelial cells maintained in Matrigel cultures form alveolar-like cysts (ALCs) using a cytogenesis mechanism that is different from that of other studied epithelial cell types: neither proliferation nor death is involved. During ALC formation, AT II cells engage simultaneously in fundamentally different, but not fully characterized, activities. The mechanisms enabling these activities and the roles they play during different process stages are virtually unknown. Identifying, characterizing, and understanding the activities and mechanisms are essential to achieving deeper insight into this fundamental feature of morphogenesis. That deeper insight is needed to answer important questions. When and how does an AT II cell choose to switch from one activity to another? Why does it choose one action rather than another? We report obtaining plausible answers using a rigorous, multi-attribute modeling and simulation approach that leveraged earlier efforts by using new agent- and object-oriented capabilities. We discovered a set of cell-level operating principles that enabled in silico cells to self-organize and generate systemic cystogenesis phenomena that are quantitatively indistinguishable from those observed in vitro. Success required that the cell components be quasi-autonomous. As simulation time advances, each in silico cell autonomously updates its environment information to reclassify its condition. It then uses the axiomatic operating principles to execute just one action for each possible condition. The quasi-autonomous actions of individual in silico cells were sufficient for developing stable cyst-like structures. The results strengthen in silico to in vitro mappings at three levels: mechanisms, behaviors, and operating principles, thereby achieving a degree of validation and enabling the questions posed to be answered. We suggest that the in silico operating principles presented may have a biological counterpart
Glacial landscape evolution by subglacial quarrying: A multiscale computational approach
Ugelvig, Sofie V.; Egholm, David L.; Iverson, Neal R.
2016-11-01
Quarrying of bedrock is a primary agent of subglacial erosion. Although the mechanical theory behind the process has been studied for decades, it has proven difficult to formulate the governing principles so that large-scale landscape evolution models can be used to integrate erosion over time. The existing mechanical theory thus stands largely untested in its ability to explain postglacial topography. In this study we relate the physics of quarrying to long-term landscape evolution with a multiscale approach that connects meter-scale cavities to kilometer-scale glacial landscapes. By averaging the quarrying rate across many small-scale bedrock steps, we quantify how regional trends in basal sliding speed, effective pressure, and bed slope affect the rate of erosion. A sensitivity test indicates that a power law formulated in terms of these three variables provides an acceptable basis for quantifying regional-scale rates of quarrying. Our results highlight the strong influence of effective pressure, which intensifies quarrying by increasing the volume of the bed that is stressed by the ice and thereby the probability of rock failure. The resulting pressure dependency points to subglacial hydrology as a primary factor for influencing rates of quarrying and hence for shaping the bedrock topography under warm-based glaciers. When applied in a landscape evolution model, the erosion law for quarrying produces recognizable large-scale glacial landforms: U-shaped valleys, hanging valleys, and overdeepenings. The landforms produced are very similar to those predicted by more standard sliding-based erosion laws, but overall quarrying is more focused in valleys, and less effective at higher elevations.
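The abstract names the three regional variables but not the calibrated coefficients; a hedged sketch of the stated power-law form follows, with placeholder constant and exponents.

    def quarrying_rate(sliding_speed, eff_pressure, bed_slope,
                       K=1e-7, a=1.0, b=1.0, c=1.0):
        """Regional quarrying-rate power law in the three variables named
        in the text; K and the exponents a, b, c are illustrative
        placeholders, not the calibrated values from the study."""
        return K * sliding_speed**a * eff_pressure**b * bed_slope**c

    # E.g., sliding speed in m/yr, effective pressure in MPa, slope as a ratio
    print(quarrying_rate(50.0, 1.0, 0.1))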
Directory of Open Access Journals (Sweden)
Friedrich Jan O
2009-01-01
... although far from statistically significant. Conclusion: We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated.
Use of Integrated Computational Approaches in the Search for New Therapeutic Agents.
Persico, Marco; Di Dato, Antonio; Orteca, Nausicaa; Cimino, Paola; Novellino, Ettore; Fattorusso, Caterina
2016-09-01
Computer-aided drug discovery plays a strategic role in the development of new potential therapeutic agents. Nevertheless, the modeling of biological systems still represents a challenge for computational chemists, and at present no single computational method is available that can face such a challenge. This prompted us, as computational medicinal chemists, to develop in-house methodologies by combining various bioinformatics and computational tools. Importantly, thanks to multi-disciplinary collaborations, our computational studies were integrated and validated by experimental data in an iterative process. In this review, we describe some recent applications of such integrated approaches and how they were successfully applied in (i) the search for new allosteric inhibitors of protein-protein interactions and (ii) the development of new redox-active antimalarials from natural leads.
Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul
2016-09-01
Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels; the algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul
2016-09-09
Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels while balancing programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
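A sketch of the closed-loop idea described above, under illustrative assumptions: the SimulatedCell class is a toy stand-in for a real 1T1R cell (not a device model), and the voltages and step sizes are placeholders; only the 12% tolerance mirrors the figure quoted in the abstract.

    import random

    class SimulatedCell:
        """Toy stand-in for a 1T1R cell; not a physical device model."""
        def __init__(self):
            self.g = 1e-6
        def set_pulse(self, v_gate):
            # Toy response: SET conductance tracks the current compliance
            # imposed by the transistor gate voltage, with variability.
            self.g = max(1e-6, 200e-6 * (v_gate - 0.3) * random.uniform(0.9, 1.1))

    def program_cell(cell, target_g, tol=0.12, v_gate=0.8, step=0.05,
                     max_pulses=50):
        cell.set_pulse(v_gate)                  # initialization pulse
        for _ in range(max_pulses):
            if abs(cell.g - target_g) / target_g <= tol:
                return cell.g                   # within 12% of target
            v_gate += step if cell.g < target_g else -step
            cell.set_pulse(v_gate)              # feedback-adjusted pulse
        raise RuntimeError("cell did not converge to target conductance")

    print(program_cell(SimulatedCell(), target_g=100e-6))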
Directory of Open Access Journals (Sweden)
Roshan N. RAJAPAKSE
2016-03-01
Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a method that combines the two aforementioned approaches. The proposed method of this study is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which are characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-based approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined template-set-centric approach proposed in this study.
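A minimal sketch of the template-matching component of the combined approach, using normalized cross-correlation over a 2-D slice; the image and landmark template below are synthetic stand-ins for CT data, and the brute-force search is written for clarity rather than speed.

    import numpy as np

    def match_template(image, template):
        """Return the (row, col) where normalized cross-correlation
        between the template and the image patch is highest."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-12)
        best, best_rc = -np.inf, (0, 0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                patch = image[r:r + th, c:c + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-12)
                score = float((p * t).mean())
                if score > best:
                    best, best_rc = score, (r, c)
        return best_rc, best

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    tpl = img[20:28, 30:38].copy()       # synthetic "landmark" template
    print(match_template(img, tpl))      # expected location: (20, 30)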
A Multi-step and Multi-level approach for Computer Aided Molecular Design
DEFF Research Database (Denmark)
A general multi-step approach for setting up, solving and analyzing the solutions of computer aided molecular design (CAMD) problems is presented. The approach differs from previous work within the field of CAMD since it also addresses the need for a computer aided problem formulation and result analysis. ... The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. ... A post-solution step using an Integrated Computer Aided System (ICAS) for result analysis and verification is included in the methodology. Keywords: CAMD, separation processes, knowledge base, molecular design, solvent selection, substitution, group contribution, property prediction, ICAS
Medical imaging in clinical applications algorithmic and computer-based approaches
Bhateja, Vikrant; Hassanien, Aboul
2016-01-01
This volume comprises 21 selected chapters, including two overview chapters, devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in medical imaging applications based on algorithmic and computer-based approaches and to utilize them in real-world clinical applications. The book is divided into four parts, Part-I: Clinical Applications of Medical Imaging, Part-II: Classification and Clustering, Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies and Part-IV: Bio-inspired Computer Aided Diagnosis techniques.
Mathematics of shape description a morphological approach to image processing and computer graphics
Ghosh, Pijush K
2009-01-01
Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics by presenting a simple shape model using two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: Explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; Promotes interaction of image processing geochronology and mathematics in the field of algebraic geometry...
Quantum computation with prethreshold superconducting qubits: Single-excitation subspace approach
Galiautdinov, Andrei
2011-01-01
We describe an alternative approach to quantum computation that is ideally suited for today's sub-threshold-fidelity qubits, and which can be applied to a family of hardware models that includes superconducting qubits with tunable coupling. In this approach, the computation on an n-qubit processor is carried out in the n-dimensional single-excitation subspace (SES) of the full 2^n-dimensional Hilbert space. Because any real Hamiltonian can be directly generated in the SES [E. J. Pritchett et al., arXiv:1008.0701], high-dimensional unitary operations can be carried out in a single step, bypassing the need to decompose into single- and two-qubit gates. Although technically nonscalable and unsuitable for applications (including Shor's) requiring enormous Hilbert spaces, this approach would make practical a first-generation quantum computer capable of achieving significant quantum speedup.
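A minimal numerical illustration of the SES idea, under the stated assumption that an arbitrary n x n real symmetric Hamiltonian can be generated directly in the single-excitation subspace: the corresponding n-dimensional evolution operator is then obtained in a single step rather than by decomposition into one- and two-qubit gates. The size and Hamiltonian below are illustrative.

    import numpy as np
    from scipy.linalg import expm

    n = 8                                  # n qubits -> n-dimensional SES
    rng = np.random.default_rng(3)
    H = rng.standard_normal((n, n))
    H = (H + H.T) / 2                      # an arbitrary real symmetric Hamiltonian

    U = expm(-1j * H * 1.0)                # one-step SES evolution operator
    print(np.allclose(U @ U.conj().T, np.eye(n)))   # unitarity check -> True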
Application of Intelligent Data Mining Approach in Securing the Cloud Computing
Directory of Open Access Journals (Sweden)
Hanna M. Said
2016-09-01
Full Text Available Cloud computing is a modern term referring to a model for emerging computing, where machines in large data centers can be used to deliver services in a scalable manner, so corporations have come to need large-scale, inexpensive computing. Recently, several governments have begun to utilize cloud computing architectures, applications and platforms for meeting the needs of their constituents and delivering services. Security ranks first among the obstacles that face cloud computing for governmental agencies and businesses. Cloud computing is surrounded by many risks that may have major effects on services and information supported via this technology. Cloud computing is also one of the promising technologies that the scientific community has recently encountered. It is related to other research areas such as distributed and grid computing, service-oriented architecture, and virtualization, as cloud computing has inherited their limitations and advancements, and it is possible to exploit new opportunities for security. The aim of this paper is to discuss and analyze how to mitigate cloud computing security risks as a basic step towards obtaining a secure and safe environment for cloud computing. The results showed that using a simple decision tree model (the CHAID algorithm) for security-rating classification is a robust technique that enables the decision-maker to measure the extent to which the cloud and the provided services are secured. It was proved throughout this paper that policies, standards, and controls are critical in the management process to safeguard and protect the systems as well as the data. The management process should analyze and understand cloud computing risks in order to protect systems and data from security exploits.
D-Wave's Approach to Quantum Computing: 1000-qubits and Counting!
CERN. Geneva
2017-01-01
In this talk I will describe D-Wave's approach to quantum computing, including the system architecture of our 1000-qubit D-Wave 2X, its programming model, and performance benchmarks. Furthermore, I will describe how the native optimization and sampling capabilities of the quantum processor can be exploited to tackle problems in a variety of fields including medicine, machine learning, physics, and computational finance.
Patel, Samir B; Ambegaokar, Saumitra U
2009-01-01
With the current development of multiprocessor systems, the drive to compute data on such processors has also increased exponentially. If multi-core processors are not fully utilized, then even though we have the computing power, the speed is not available to end users for their respective applications. Accordingly, users or application designers also have to design newer applications that take care of the computing infrastructure available within. Our approach is to use CUDA (Compute Unified Device Architecture) as the backend and MATLAB as the front end to design an application implementing steganography. Steganography is the term used for hiding information in a cover object such as image, audio or video data. As the computation required for multimedia data is much greater than for text information, we have been successful in implementing image steganography with the help of technology for the next generation.
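The abstract does not specify the embedding scheme, so the sketch below uses a common least-significant-bit (LSB) scheme in plain NumPy as a stand-in; the paper's CUDA/MATLAB implementation is not reproduced.

    import numpy as np

    def embed_lsb(cover, bits):
        """Hide a bit array in the least-significant bits of image pixels."""
        stego = cover.copy().ravel()
        stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
        return stego.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        return stego.ravel()[:n_bits] & 1

    rng = np.random.default_rng(4)
    cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8))
    stego = embed_lsb(cover, bits)
    print(np.packbits(extract_lsb(stego, bits.size)).tobytes())  # b'hi'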
A Novel Approach for Optimal Multimedia Data Distribution in Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Pham Phuoc Hung
2014-01-01
Full Text Available With the integration of mobile computing and cloud computing, more diverse services can be provided to users, supporting the mobility feature of mobile computing along with the power of cloud computing. This new paradigm still faces challenges, especially in terms of performance. When it comes to multimedia data communication, thin clients (such as smart phones and tablets) suffer because of performance and power constraints. Previous studies have only trivially addressed this problem. Therefore, in our paper, we present a framework in which thick clients (laptop or desktop computers) are incorporated into the mobile cloud paradigm with attention paid to user mobility. Its objective is to optimize the distribution of multimedia content between the cloud and the thin clients. Our work provides both numerical analysis and simulation to justify the validity and effectiveness of the proposed approach.
A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises
DEFF Research Database (Denmark)
Kuada, Eric; Olesen, Henning
2011-01-01
This paper proposes a social network approach to the provisioning and management of cloud computing services, termed Opportunistic Cloud Computing Services (OCCS), for enterprises, and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay, or paying a minimal fee, for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need...
Mayorga, René V; Carrera, Jonathan
2007-06-01
This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time-variant functions via an overall damped least-squares solution that includes a novel null-space vector for singularity prevention. The singularity-avoidance null-space vector is derived by developing a sufficiency condition for singularity prevention, which leads to some characterizing matrices and an associated performance index.
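The RBFN construction itself is not reproduced here; the sketch below shows the damped least-squares inversion with a null-space singularity-avoidance term that the abstract describes, with an illustrative Jacobian, task velocity, and null-space vector.

    import numpy as np

    def damped_ls_inverse(J, xdot, z, lam=1e-2):
        """Damped least-squares solution with a null-space term:
        qdot = J^T (J J^T + lam^2 I)^-1 xdot + (I - J^+ J) z,
        where z steers the solution away from singular configurations."""
        m, n = J.shape
        JJt = J @ J.T + lam**2 * np.eye(m)
        qdot = J.T @ np.linalg.solve(JJt, xdot)
        P = np.eye(n) - np.linalg.pinv(J) @ J   # null-space projector
        return qdot + P @ z

    J = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.2]])             # illustrative task Jacobian
    xdot = np.array([0.3, -0.1])
    z = np.array([0.0, 0.1, -0.1])              # singularity-avoidance vector
    print(damped_ls_inverse(J, xdot, z))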
A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems
Directory of Open Access Journals (Sweden)
Aleksander Byrski
2010-01-01
Full Text Available In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems as well as the means to manipulate the configuration of the computation according to the crises or user's specific demands.
Computational approaches to Poisson traces associated to finite subgroups of Sp(2n,C)
Etingof, Pavel; Pacchiano, Aldo; Ren, Qingchun; Schedler, Travis
2011-01-01
We reduce the computation of Poisson traces on quotients of symplectic vector spaces by finite subgroups of symplectic automorphisms to a finite one, by proving several results which bound the degrees of such traces as well as the dimension in each degree. This applies more generally to traces on all polynomial functions which are invariant under invariant Hamiltonian flow. We implement these approaches by computer together with direct computation for infinite families of groups, focusing on complex reflection and abelian subgroups of GL(2,C) < Sp(4,C), Coxeter groups of rank <= 3 and A_4, B_4=C_4, and D_4, and subgroups of SL(2,C).
Anharmonic-potential-effective-charge approach for computing Raman cross sections of a gas
Kutteh, Ramzi; van Zandt, L. L.
1993-05-01
An anharmonic-potential-effective-charge approach for computing relative Raman intensities of a gas is developed. The equations of motion are set up and solved for the driven anharmonic molecular vibrations. An explicit expression for the differential polarizability tensor is derived and its properties discussed. This expression is then used within the context of Placzek's theory [Handbuch der Radiologie (Akademische Verlagsgesellschaft, Leipzig, 1934), Vol. VI] to compute the Raman cross section and depolarization ratio of a gas. The computation is carried out for the small molecules CO2, CS2, SO2, and CCl4; results are compared with experimental measurements and discussed.
Wang, Chu-Fu; Lin, Chih-Lung; Deng, Jien-Han
2012-01-01
Testing is an important stage of teaching as it can assist teachers in auditing students' learning results. A good test is able to accurately reflect the capability of a learner. Nowadays, Computer-Assisted Testing (CAT) is greatly improving traditional testing, since computers can automatically and quickly compose a proper test sheet to meet user…
A Computer-Aided FPS-Oriented Approach for Construction Briefing
Institute of Scientific and Technical Information of China (English)
Xiaochun Luo; Qiping Shen
2008-01-01
Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful in performing product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic, efficient identification, clarification, and representation of client requirements in trial runs. The limitations of the approach and future research work are also discussed at the end of the paper.
Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie
2016-01-01
Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits.
An approach to experimental evaluation of real-time fault-tolerant distributed computing schemes
Kim, K. H.
1989-01-01
A testbed-based approach to the evaluation of fault-tolerant distributed computing schemes is discussed. The approach is based on experimental incorporation of system structuring and design techniques into real-time distributed-computing testbeds centered around tightly coupled microcomputer networks. The effectiveness of this approach has been experimentally confirmed. Primary advantages of this approach include the accuracy of the timing and logical-complexity data and the degree of assurance of the practical effectiveness of the scheme evaluated. Various design issues encountered in the course of establishing the network testbed facilities are discussed, along with their augmentation to support some experiments. The shortcomings of the testbeds are also discussed together with the desired extensions of the testbeds.
Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie
2016-01-01
Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807
Wenger, Etienne
2014-01-01
Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic
Directory of Open Access Journals (Sweden)
Sinem Oktem-Okullu
Full Text Available The outcome of H. pylori infection is closely related to the bacterium's virulence factors and the host immune response. The association between T cells and H. pylori infection has been identified, but the effects of the nine major H. pylori-specific virulence factors (cagA, vacA, oipA, babA, hpaA, napA, dupA, ureA, ureB) on the T cell response in H. pylori-infected patients have not been fully elucidated. We developed a multiplex-PCR assay to detect nine H. pylori virulence genes within three PCR reactions. Also, the expression levels of Th1, Th17 and Treg cell-specific cytokines and transcription factors were detected by using qRT-PCR assays. Furthermore, a novel expert-derived model was developed to identify a set of factors and rules that can distinguish ulcer patients from gastritis patients. Among all the virulence factors that we tested, we identified, for the first time, a correlation between the presence of the napA virulence gene and ulcer disease. Additionally, a positive correlation between the H. pylori dupA virulence factor and IFN-γ, and between the H. pylori babA virulence factor and IL-17, was detected in gastritis and ulcer patients, respectively. By using computer-based models, the clinical outcome of a patient infected with H. pylori can be predicted by screening the patient's H. pylori vacA m1/m2, ureA and cagA status and IFN-γ (Th1), IL-17 (Th17), and FOXP3 (Treg) expression levels. Herein, we report, for the first time, the relationship between H. pylori virulence factors and host immune responses for the diagnostic prediction of gastric diseases using computer-based models.
Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L
2006-07-01
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134+/-51 and 67+/-56 ml) were similar to those by MR (137+/-57 and 70+/-60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55+/-21 vs. 56+/-21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3+/-1.8 vs. 8.8+/-1.9 mm and 12.7+/-3.4 vs. 13.3+/-3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54+/-30 vs. 51+/-31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.
Energy Technology Data Exchange (ETDEWEB)
Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium)]; Coche, Emmanuel [Universite Catholique de Louvain, Division of Radiology, Brussels (Belgium)]; Gerber, Bernhard L. [Universite Catholique de Louvain, Division of Cardiology, Brussels (Belgium); Cliniques Universitaires St. Luc UCL, Department of Cardiology, Woluwe St. Lambert (Belgium)]
2006-07-15
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)
Lan, Yu; Zou, Lufeng; Cao, Yang; Houk, K N
2011-12-01
Theoretical calculations were performed on the 1,3-dipolar cycloaddition reactions of 24 1,3-dipoles with ethylene and acetylene. The 24 1,3-dipoles are of the formula X≡Y(+)-Z(-) (where X is HC or N, Y is N, and Z is CH(2), NH, or O) or X═Y(+)-Z(-) (where X and Z are CH(2), NH, or O and Y is NH, O, or S). The high-accuracy G3B3 method was employed as the reference. CBS-QB3, CCSD(T)//B3LYP, SCS-MP2//B3LYP, B3LYP, M06-2X, and B97-D methods were benchmarked to assess their accuracies and to determine an accurate method that is practical for large systems. Several basis sets were also evaluated. Compared to the G3B3 method, CBS-QB3 and CCSD(T)/maug-cc-pV(T+d)Z//B3LYP methods give similar results for both activation and reaction enthalpies (mean average deviation, MAD, < 1.5 kcal/mol). SCS-MP2//B3LYP and M06-2X give small errors for the activation enthalpies (MAD < 1.5 kcal/mol), while B3LYP has MAD = 2.3 kcal/mol. SCS-MP2//B3LYP and B3LYP give the reasonable reaction enthalpies (MAD < 5.0 kcal/mol). The B3LYP functional also gives good results for most 1,3-dipoles (MAD = 1.9 kcal/mol for 17 common 1,3-dipoles), but the activation and reaction enthalpies for ozone and sulfur dioxide are difficult to calculate by any of the density functional methods.
Directory of Open Access Journals (Sweden)
Chuan Jiang
2013-01-01
Full Text Available Condition-based monitoring (CBM) has advanced to the stage where industry is now demanding machinery that possesses self-diagnosis ability. This need has spurred CBM research to become applicable in more expanded areas over the past decades. There are two critical issues in implementing CBM in harsh environments using embedded systems: computational efficiency and adaptability. In this paper, a computationally efficient and adaptive approach including simple principal component analysis (SPCA) for feature dimensionality reduction and K-means clustering for classification is proposed for online embedded machinery diagnosis. Compared with standard principal component analysis (PCA) and kernel principal component analysis (KPCA), SPCA is adaptive in nature and has lower algorithmic complexity when dealing with a large amount of data. The effectiveness of the proposed approach is first validated using a standard rolling element bearing test dataset on a personal computer. It is then deployed on an embedded real-time controller and used to monitor a rotating shaft. It was found that the proposed approach scaled well, whereas the standard PCA-based approach broke down when the data quantity increased to a certain level. Furthermore, the proposed approach achieved 90% accuracy when diagnosing an induced fault, compared to 59% accuracy obtained using the standard PCA-based approach.
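Implementations of simple PCA (SPCA) vary, so the sketch below substitutes scikit-learn's standard PCA for the dimensionality-reduction step, followed by K-means, to illustrate the diagnosis pipeline on synthetic healthy/faulty feature vectors; it is a stand-in, not the paper's SPCA.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    healthy = rng.normal(0.0, 1.0, size=(100, 20))   # synthetic features
    faulty = rng.normal(3.0, 1.0, size=(100, 20))    # e.g., bearing fault
    X = np.vstack([healthy, faulty])

    Z = PCA(n_components=2).fit_transform(X)         # dimensionality reduction
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)

    # The two clusters should separate healthy from faulty samples
    print(np.bincount(labels[:100]), np.bincount(labels[100:]))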
Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis
2016-08-01
The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing, especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA), are reviewed and discussed in the context of nonlinear dynamical systems characterization. In this work such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCS), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and the speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB and using the CUDA environment, something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing strategies.
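The paper's GPU/CUDA implementation is not reproduced here; the following sequential sketch estimates the largest (global) Lyapunov exponent of the forced Duffing oscillator with the Benettin renormalization method. The oscillator parameters are a standard chaotic choice, not necessarily those used in the paper.

    import numpy as np

    def duffing(t, s, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
        x, v = s
        return np.array([v, -delta * v - alpha * x - beta * x**3
                         + gamma * np.cos(omega * t)])

    def rk4_step(f, t, s, h):
        k1 = f(t, s)
        k2 = f(t + h / 2, s + h / 2 * k1)
        k3 = f(t + h / 2, s + h / 2 * k2)
        k4 = f(t + h, s + h * k3)
        return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Benettin method: follow a nearby trajectory, renormalize its offset
    h, d0, lam_sum, n_steps = 0.01, 1e-8, 0.0, 50_000
    s = np.array([0.1, 0.0])
    sp = s + np.array([d0, 0.0])
    for i in range(n_steps):
        t = i * h
        s, sp = rk4_step(duffing, t, s, h), rk4_step(duffing, t, sp, h)
        d = np.linalg.norm(sp - s)
        lam_sum += np.log(d / d0)
        sp = s + (sp - s) * (d0 / d)       # renormalize the separation
    print(lam_sum / (n_steps * h))         # largest Lyapunov exponent estimate

Longer runs give better-converged estimates; a positive value indicates chaos on the attractor.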
Fukuda, Ikuo; Kamiya, Narutoshi; Yonezawa, Yasushige; Nakamura, Haruki
2012-08-01
The zero-dipole summation method was extended to general molecular systems, and then applied to molecular dynamics simulations of an isotropic water system. In our previous paper [I. Fukuda, Y. Yonezawa, and H. Nakamura, J. Chem. Phys. 134, 164107 (2011), 10.1063/1.3582791], for evaluating the electrostatic energy of a classical particle system, we proposed the zero-dipole summation method, which conceptually prevents the nonzero-charge and nonzero-dipole states artificially generated by a simple cutoff truncation. Here, we consider the application of this scheme to molecular systems, as well as some fundamental aspects of general cutoff truncation protocols. Introducing an idea to harmonize the bonding interactions and the electrostatic interactions in the scheme, we develop a specific algorithm. As in the previous study, the resulting energy formula is represented by a simple pairwise function sum, enabling facile applications to high-performance computation. The accuracy of the electrostatic energies calculated by the zero-dipole summation method with the atom-based cutoff was numerically investigated, by comparison with those generated by the Ewald method. We obtained an electrostatic energy error of less than 0.01% at a cutoff length longer than 13 Å for a TIP3P isotropic water system, and the errors were quite small, as compared to those obtained by conventional truncation methods. The static property and the stability in an MD simulation were also satisfactory. In addition, the dielectric constants and the distance-dependent Kirkwood factors were measured, and their coincidences with those calculated by the particle mesh Ewald method were confirmed, although such coincidences are not easily attained by truncation methods. We found that the zero damping-factor gave the best results in a practical cutoff distance region. In fact, in contrast to the zero-charge scheme, the damping effect was insensitive in the zero-charge and zero-dipole scheme, in the
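Since the resulting energy is a simple pairwise sum, the core of the scheme fits in a few lines. A minimal sketch, assuming the undamped (zero damping factor) zero-dipole pair function u(r) = 1/r - 3/(2*rc) + r**2/(2*rc**3), which vanishes together with its derivative at the cutoff; self-energy terms, physical units and the bonded-interaction treatment are omitted:

```python
import numpy as np

def zd_pair_energy(q, pos, rc):
    """Pairwise electrostatic energy with the (undamped) zero-dipole pair
    function u(r) = 1/r - 3/(2*rc) + r**2/(2*rc**3); self/excess terms and
    units are omitted for brevity."""
    E = 0.0
    n = len(q)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < rc:
                E += q[i] * q[j] * (1.0/r - 1.5/rc + r*r/(2.0*rc**3))
    return E

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 20.0, size=(64, 3))        # toy coordinates (Angstrom)
q = np.where(rng.random(64) < 0.5, 0.42, -0.84)   # TIP3P-like charges
print(zd_pair_energy(q, pos, rc=13.0))
```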
A principled approach to distributed multiscale computing, from formalization to execution
Borgdorff, J.; Falcone, J.-L.; Lorenz, E.; Chopard, B.; Hoekstra, A.G.
2011-01-01
In several disciplines, a multiscale approach is being used to model complex natural processes, yet a principled background to multiscale modeling is not clear. Additionally, some multiscale models require distributed resources to be computed in an acceptable timeframe, while no standard framework
Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis
Nikkilä, J.; Vos, de W.M.
2010-01-01
GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex micr
A Computer-Assisted Multiliteracies Programme as an Alternative Approach to EFL Instruction
Lee, Kyungmee; Ardeshiri, Minoo; Cummins, Jim
2016-01-01
The aim of this article is to introduce a computer-assisted multiliteracies programme (CaMP) as an alternative approach to English as a Foreign Language (EFL) instruction in order to overcome the educational limitations that are inherent in most EFL settings. In a number of monolingual societies with a dominant language other than English,…
The Effects of Computer Supported Problem Based Learning on Students' Approaches to Learning
Ak, Serife
2011-01-01
The purpose of this paper is to investigate the effects of computer supported problem based learning on students' approaches to learning. The research was conducted as a one-group pre-test/post-test design to achieve the objectives of the study. The experimental process of the study lasted 5 weeks and was carried out on 78 university…
WPS mediation: An approach to process geospatial data on different computing backends
Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas
2012-10-01
The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as the environmental sciences. These problems are usually not, or only partially, solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept obtained by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach is demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.
A Two Layer Approach to the Computability and Complexity of Real Functions
DEFF Research Database (Denmark)
Lambov, Branimir Zdravkov
2003-01-01
We present a new model for computability and complexity of real functions, together with an implementation that is based on it. The model uses a two-layer approach in which low-type basic objects perform the computation of a real function but, whenever needed, can be complemented with higher-type… characterizing functions. A similar discrimination is implemented in the presented real number package, which operates on both an approximation layer and a layer which operates on real numbers as complete entities. This approach allows the model to provide correctness and completeness to the established notions… in computable analysis, while the efficiency of the implementation is not compromised by the need to create and maintain higher-type objects…
Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.
1985-01-01
Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.
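For linear elements on a uniform mesh with a lumped mass matrix, the explicit one-step Taylor-Galerkin update for linear advection reduces to a Lax-Wendroff-type stencil. A minimal sketch under those assumptions (a 1-D scalar model problem, not the paper's high-speed flow setting):

```python
import numpy as np

# One-step explicit Taylor-Galerkin for u_t + c u_x = 0 on a uniform mesh.
# With linear elements and a lumped mass matrix, the update coincides with
# the Lax-Wendroff stencil; C is the Courant number c*dt/dx.
n, C, nsteps = 200, 0.5, 240
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.25) ** 2)           # initial Gaussian pulse

for _ in range(nsteps):
    up, um = np.roll(u, -1), np.roll(u, 1)     # periodic neighbours
    u = u - 0.5*C*(up - um) + 0.5*C*C*(up - 2.0*u + um)

print(f"peak after transport: {u.max():.3f} at x = {x[u.argmax()]:.3f}")
```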
Carelli, Fabio; Gianturco, Francesco Antonio
2016-06-01
The interaction of low-energy scattering electrons/positrons with molecular targets characterized by a "supercritical" permanent dipole moment (≳2.0 D) presents special physical characteristics that affect their spatial distributions, around the nuclear network of the molecular partners, both above and below the energy thresholds. Such special states are described as either dipole scattering states (DSS) above thresholds or as dipole bound states (DBS) below thresholds. The details of their respective behaviour will be presented and discussed in this work in the case of the purinic DNA bases of adenine and guanine. The behaviour of the additional electron, in particular, will be discussed in detail by providing new computational results that will be related to the findings from recent experiments on the same DNA bases, confirming the transient electron's behaviour surmised by them. This work is affectionately dedicated to Michael Allan on the occasion of his official retirement. We wish this dear friend and outstanding scientist many years to come in the happy pursuit of his many scientific interests. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.
A comparative approach to computer aided design model of a dog femur.
Turamanlar, O; Verim, O; Karabulut, A
2016-01-01
Computer assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three dimensional (3D) modelling of soft tissues and bones is becoming more important. The accuracy of the analysis in modelling processes depends on the outline of the tissues derived from medical images. The aim of this study is to evaluate the accuracy of 3D models of a dog femur derived from computed tomography data by using the point cloud method and the boundary line method in several modelling software packages. Solidworks, Rapidform and 3DSMax were used to create the 3D models, and the outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with the stereolithography method using a rapid prototyping device. Furthermore, the linearity of the volumes of the models was investigated between the software and the constructed models. The differences between the software-generated and real models reflect the sensitivity of the software and the devices used.
Directory of Open Access Journals (Sweden)
Cristian Toma
2013-01-01
Full Text Available This study presents wavelet-computational aspects of the Sterian-realistic approach to the uncertainty principle in high energy physics. According to this approach, one cannot make a device for the simultaneous measuring of the canonical conjugate variables in reciprocal Fourier spaces. However, such aspects regarding the use of conjugate Fourier spaces can also be noticed in quantum field theory, where the position representation of a quantum wave is replaced by the momentum representation before computing the interaction in a certain point of space, at a certain moment of time. For this reason, certain properties regarding the switch from one representation to another in these conjugate Fourier spaces should be established. It is shown that the best results can be obtained using wavelet aspects and support macroscopic functions for computing (i) wave-train nonlinear relativistic transformation, (ii) reflection/refraction with a constant shift, (iii) diffraction considered as interaction with a null phase shift without annihilation of the associated wave, (iv) deflection by external electromagnetic fields without phase loss, and (v) annihilation of the associated wave-train through fast and spatially extended phenomena according to the uncertainty principle.
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used, and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
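The baseline fingerprint step on which such schemes build is a k-nearest-neighbour match of an observed received-signal-strength (RSS) vector against a survey database. A sketch of that baseline alone, with synthetic data; the paper's data scaling and hidden-Markov augmentation are not reproduced here:

```python
import numpy as np

# Baseline fingerprint positioning: match an observed RSS vector against a
# database of (position, RSS fingerprint) records with k-nearest neighbours.
def knn_position(fingerprints, positions, observed, k=3):
    d = np.linalg.norm(fingerprints - observed, axis=1)
    idx = np.argsort(d)[:k]
    wts = 1.0 / (d[idx] + 1e-9)               # inverse-distance weighting
    return (positions[idx] * wts[:, None]).sum(0) / wts.sum()

rng = np.random.default_rng(2)
positions = rng.uniform(0, 100, size=(400, 2))            # survey points (m)
fingerprints = rng.normal(-70, 6, size=(400, 4))          # RSS from 4 cells (dBm)
obs = fingerprints[17] + rng.normal(0, 2, size=4)         # noisy observation
print(knn_position(fingerprints, positions, obs))
```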
Bayesian approaches to spatial inference: Modelling and computational challenges and solutions
Moores, Matthew; Mengersen, Kerrie
2014-12-01
We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
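Of the three techniques compared, pseudo-likelihood is the easiest to sketch: each site's conditional distribution given its neighbours is tractable even though the full Potts likelihood is not. A minimal grid-search illustration, assuming a 4-neighbour grid with periodic boundaries for brevity:

```python
import numpy as np

def neg_log_pseudolikelihood(z, beta, q=3):
    """Potts pseudo-likelihood on a 2-D label grid z with q states:
    p(z_i | z_neighbours) is proportional to exp(beta * #matching neighbours).
    Neighbours via np.roll, i.e. periodic boundaries for brevity."""
    same = np.zeros(z.shape + (q,))
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        nb = np.roll(z, shift, axis=axis)
        for k in range(q):
            same[..., k] += (nb == k)
    logZ = np.log(np.exp(beta * same).sum(-1))              # local normalizer
    picked = beta * np.take_along_axis(same, z[..., None], -1)[..., 0]
    return float((logZ - picked).sum())

rng = np.random.default_rng(3)
z = rng.integers(0, 3, size=(32, 32))          # toy label image
betas = np.linspace(0.0, 1.5, 16)
best = min(betas, key=lambda b: neg_log_pseudolikelihood(z, b))
print(f"pseudo-likelihood estimate of beta: {best:.2f}")
```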
Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C
Directory of Open Access Journals (Sweden)
Vladimiras Dolgopolovas
2015-01-01
Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, building learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The C programming language was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP as parallelization tools.
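Purely to illustrate the pipeline idea in compact form (the paper's own models use C with MPI and OpenMP), here is a sketch of a two-phase pipeline built from processes connected by queues, which is also a minimal multiphase queueing system:

```python
from multiprocessing import Process, Queue

def stage(inq, outq, fn):
    """One pipeline phase: serve items from inq until a None sentinel."""
    for item in iter(inq.get, None):
        outq.put(fn(item))
    outq.put(None)                     # propagate the sentinel downstream

def square(x): return x * x           # service at phase 1
def inc(x):    return x + 1           # service at phase 2

if __name__ == "__main__":
    q0, q1, q2 = Queue(), Queue(), Queue()
    p1 = Process(target=stage, args=(q0, q1, square))
    p2 = Process(target=stage, args=(q1, q2, inc))
    p1.start(); p2.start()
    for i in range(5):
        q0.put(i)
    q0.put(None)
    print(list(iter(q2.get, None)))    # [1, 2, 5, 10, 17]
    p1.join(); p2.join()
```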
Frisby, Joshua; Smith, Vernon; Traub, Stephen; Patel, Vimla L
2017-01-01
Hospital Emergency Departments (EDs) frequently experience crowding. One of the factors that contributes to this crowding is the "door to doctor time", which is the time from a patient's registration to when the patient is first seen by a physician. This is also one of the Meaningful Use (MU) performance measures that emergency departments report to the Center for Medicare and Medicaid Services (CMS). Current documentation methods for this measure are inaccurate due to the imprecision of manual data collection. We describe a method for automatically (in real time) and more accurately documenting the door to physician time. Using sensor-based technology, the distance between the physician and the computer is calculated by single-board computers installed in patient rooms, which log each time a Bluetooth signal is seen from a device that the physicians carry. This distance is automatically compared with the accepted room radius to determine whether the physician was present in the room at the logged time, providing greater precision. The logged times, accurate to the second, were compared with physicians' handwritten times, showing the automatic recordings to be more precise. This real-time automatic method frees the physician from the extra cognitive load of manually recording data. This method for evaluating performance is generic and can be used in any setting outside the ED, and for purposes other than measuring physician time.
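The abstract does not spell out the distance computation, but a common way to turn a Bluetooth RSSI reading into a distance estimate is the log-distance path-loss model. A sketch, where the 1 m reference power, the path-loss exponent and the room radius are deployment-specific assumptions:

```python
import math
from datetime import datetime

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Log-distance path-loss model: distance in metres from a Bluetooth RSSI.
    tx_power_dbm is the calibrated RSSI at 1 m; n is the path-loss exponent.
    Both are deployment-specific assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

ROOM_RADIUS_M = 2.5   # accepted room radius (assumed)

def log_if_present(physician_id, rssi_dbm):
    # Log a timestamped presence event only when the estimated distance
    # falls within the accepted room radius.
    if rssi_to_distance(rssi_dbm) <= ROOM_RADIUS_M:
        print(f"{datetime.now().isoformat()} {physician_id} in room")

log_if_present("dr_smith", -63.0)
```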
Directory of Open Access Journals (Sweden)
Şeyda Gül
2011-06-01
Full Text Available The aim of this study is to determine the effect of computer assisted instruction based on the constructivist learning approach on students' attitudes towards computers and towards the science and technology lesson, and on their achievement in the science and technology lesson. The study group consists of 56 students attending the fourth grade of a public primary school, selected via the convenience sampling method from Körfez (Kocaeli). The data were collected by means of an Attitude Scale towards the Science and Technology Lesson, an Attitude Scale towards Computers, and an Achievement Test. In this research, a quasi-experimental design with pre-test/post-test control group was employed. The subject matter was taught using the constructivist learning method of the current syllabus in the control group, and using computer assisted instruction based on the constructivist learning approach in the experimental group. The findings showed a statistically significant difference between the groups' post-test attitudes towards computers and post-test achievement scores in favour of the experimental group (p < 0.05), while no statistically significant difference was found in attitudes towards the science and technology lesson (p > 0.05), despite a positive increase in the experimental group's attitudes.
Accurate complex scaling of three dimensional numerical potentials.
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry
2013-05-28
The complex scaling method, which consists in continuing the spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
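The grid-based variant of the idea is easy to demonstrate in one dimension: scale x → x·e^{iθ} in a finite-difference Hamiltonian and look for θ-stable complex eigenvalues. The model potential below is an illustrative choice, not the one used in the paper:

```python
import numpy as np

# Complex scaling x -> x * exp(i*theta) applied to a 1-D model Hamiltonian
# H = -1/2 d^2/dx^2 + V(x), discretized with finite differences. Resonances
# show up as theta-stable complex eigenvalues E = E_r - i*Gamma/2.
def scaled_spectrum(theta, n=600, L=20.0):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    V = lambda z: 0.5 * z**2 * np.exp(-0.1 * z**2)   # illustrative potential
    D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    H = -0.5 * np.exp(-2j * theta) * D2 + np.diag(V(x * np.exp(1j * theta)))
    return np.linalg.eigvals(H)

E = scaled_spectrum(theta=0.3)
# Crude filter for resonance candidates; rotated-continuum states with small
# real part can also pass it -- true resonances are the theta-stable ones.
res = E[(E.real > 0) & (E.imag < -1e-4) & (E.imag > -0.5)]
print(np.sort_complex(res)[:5])
```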
A new approach based on PSO algorithm to find good computational encoding sequences
Institute of Scientific and Technical Information of China (English)
Cui Guangzhao; Niu Yunyun; Wang Yanfeng; Zhang Xuncai; Pan Linqiang
2007-01-01
Computational encoding DNA sequence design is one of the most important steps in molecular computation. A lot of research work has been done to design reliable sequence libraries. A revised method based on the support system developed by Tanaka et al. is proposed here, with different criteria used to construct the fitness function. We then adapt the particle swarm optimization (PSO) algorithm to our encoding problem. Using the new algorithm, a set of sequences with good quality is generated. The results also show that our PSO-based approach converges rapidly to the minimum level for an output of the simulation model. The speed of the algorithm fits our requirements.
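A minimal generic PSO loop is sketched below; the paper's actual fitness function scores candidate DNA encoding sets against sequence-design criteria (similarity, H-measure, GC content, and the like), and a simple sphere function stands in for it here:

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Generic particle swarm optimizer (minimization). A candidate DNA
    encoding would be scored by a domain fitness; the sphere function used
    in the demo below is only a stand-in."""
    rng = np.random.default_rng(4)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(fitness, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # velocity update
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

print(pso(lambda s: float((s**2).sum()), dim=8))
```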
An Approach to Computer Modeling of Geological Faults in 3D and an Application
Institute of Scientific and Technical Information of China (English)
ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai
2006-01-01
3D geological modeling, one of the most important applications in geosciences of 3D GIS, forms the basis and is a prerequisite for visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques of complex geological entities containing reverse faults are discussed and a series of approaches are proposed. The geological concepts involved in computer modeling and visualization of geological fault in 3D are explained, the type of data of geological faults based on geological exploration is analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a modeling technique of faults based on stratum recovery and a modeling technique of faults based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of a fault model of bed rock in the Beijing Olympic Green District is presented in order to show the practical result of this method. The principle and the process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals established. It strengthens our profound comprehension of geological phenomena and the modeling approach, and establishes the basic techniques of 3D geological modeling for practical applications in the field of geosciences.
Directory of Open Access Journals (Sweden)
Wilianto Wilianto
2015-10-01
Full Text Available This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and modeled using Riverbed Academic Edition Modeler to obtain values of the parameters: delay, load, CPU utilization, and throughput. The simulation results are the following. For network topology 1, without cloud computing, the average delay is 54 ms, load 110 000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2 800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2, without cloud computing, the average delay is 39 ms, load 3 500 bits/s, CPU utilization 0.02%, and throughput of the database server 1 400 bits/s. With cloud computing, the average delay is 26 ms, load 5 400 bits/s, CPU utilization of the email server 0.0001%, FTP server 0.001%, HTTP server 0.0002%, throughput of the email server 85 bits/s, FTP server 100 bits/s, and HTTP server 95 bits/s. Thus, the delay, the load, and the CPU utilization decrease, but the throughput increases. Information technology service management with the cloud computing approach has better performance.
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluating the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses, based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected.
Energy Technology Data Exchange (ETDEWEB)
Kurdziel, J.C.; Dondelinger, R.F.; Hemmer, M.
1987-01-01
107 polytraumatized patients who had experienced blunt trauma were worked up at admission with computed tomography of the thorax, abdomen and pelvis, following a computed tomography study of the brain: significant lesions were revealed in 98 (90%) patients. 79 (74%) patients showed trauma to the thorax; in 69 (64%) patients abdominal or pelvic trauma was evidenced. No false positive diagnosis was established. 5 traumatic findings were missed. Emergency angiography was indicated in 3 (3%) patients following computed tomography examination. 3 other trauma patients were submitted directly to angiography without computed tomography examination during the period of this study. Embolization was carried out in 5/6 patients. No thoracotomy was needed. 13 (12%) patients underwent laparotomy following computed tomography. Overall mortality during the hospital stay was 14% (15/107). No patient died from visceral bleeding. Conservative management of blunt polytrauma patients can be advocated in almost 90% of visceral lesions. Computed tomography, coupled with angiography and embolization, represents an adequate integrated approach to the management of blunt polytrauma patients.
Computer-aided approach for customized cell-based defect reconstruction.
Meyer, Ulrich; Neunzehn, Jörg; Wiesmann, Hans Peter
2012-01-01
Computer-aided technologies like computer-aided design (CAD) and computer-aided manufacturing (CAM), together with related tools like the finite element method (FEM), have recently been employed in medical applications such as extracorporeal bone tissue engineering strategies. The aim of this pilot experimental study was to test whether autologous osteoblast-like cells cultured in vitro on individualized scaffolds can be used to support bone regeneration in a clinical environment. Mandibular bone defects were surgically introduced into the mandibles of Göttinger minipigs, and the scaffold for the defect site was modelled by CAD/CAM techniques. Autologous bone cells harvested from the porcine calvaria were cultivated in bioreactors. The cultured osteoblast-like cells were seeded on polylactic acid/polyglycolic acid (PLA/PGA) copolymer scaffolds generated by rapid prototyping. The bone defects were then reconstructed by implanting these tissue constructs into the defects. The postoperative computed tomography scans, as well as the intraoperative sites, demonstrated an accurate fit in the defect sites. The individually created, implanted scaffold constructs enriched with the porcine osteoblast-like cells were well tolerated and appeared to support bone formation, as revealed by immunohistochemical and histological analyses. The results of this investigation indicate that in vitro expanded osteoblast-like cells spread on a resorbable, individualized, computer-aided fabricated scaffold are capable of promoting the repair of bone tissue defects in vivo. These results warrant further attempts to combine computer modelling and tissue engineering for use in bone reconstructive surgery.
Chakraborty, Partha Sarathi; Karunanithi, Sellam; Dhull, Varun Singh; Kumar, Kunal; Tripathi, Madhavi
2015-01-01
We present the case of a 35-year-old man with the calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia (CREST) variant of scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. (99m)Tc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and both legs which involved the muscles and was well delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis.
Adeshina, A M; Hashim, R
2017-03-01
Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. Recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the acquisition, storage and sharing of diagnostic data for efficient and timely diagnosis of diseases. Connected health networks were introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, the modern medicinal approach has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnosis privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of not being able to successfully encrypt both the image and the textual data persists. Furthermore, the processing time of encryption-decryption of medical datasets, at a considerably lower computational cost and without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for a connected health network using a high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework's strength in encrypting and decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme, as it combines the modeling versatility of contemporary finite elements with transform methods and the classical Bubnov-Galerkin schemes. Applicability of the proposed formulations to nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation problems. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.
Computational approaches towards understanding human long non-coding RNA biology.
Jalali, Saakshi; Kapoor, Shruti; Sivadas, Ambily; Bhartiya, Deeksha; Scaria, Vinod
2015-07-15
Long non-coding RNAs (lncRNAs) form the largest class of non-protein coding genes in the human genome. While a small subset of well-characterized lncRNAs has demonstrated their significant role in diverse biological functions like chromatin modifications, post-transcriptional regulation, imprinting etc., the functional significance of a vast majority of them still remains an enigma. Increasing evidence of the implications of lncRNAs in various diseases including cancer and major developmental processes has further enhanced the need to gain mechanistic insights into the lncRNA functions. Here, we present a comprehensive review of the various computational approaches and tools available for the identification and annotation of long non-coding RNAs. We also discuss a conceptual roadmap to systematically explore the functional properties of the lncRNAs using computational approaches.
Integrating structure-based and ligand-based approaches for computational drug design.
Wilson, Gregory L; Lill, Markus A
2011-04-01
Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.
A Review of Intrusion Detection Technique by Soft Computing and Data Mining Approach
Directory of Open Access Journals (Sweden)
Aditya Shrivastava
2013-09-01
Full Text Available The growth of internet technology has spread a large amount of data communication. This communication is compromised by network threats and security issues, which raise problems of data integrity and data loss. To address these problems, Anderson developed a model of an intrusion detection system some 20 years ago. Initially, intrusion detection systems worked by processing the statistical frequency of audit system logs. Later, this system was improved by various researchers applying other approaches such as data mining techniques, neural networks and expert systems. Current research trends in intrusion detection use soft computing approaches such as fuzzy logic, genetic algorithms and machine learning. This paper discusses some methods of data mining and soft computing for the purpose of intrusion detection. The KDDCUP99 dataset is used for performance evaluation of these techniques.
A Computational Approach for Analyzing and Detecting Emotions in Arabic Text
Directory of Open Access Journals (Sweden)
Amira F. El Gohary, Torky I. Sultan, Maha A. Hana, Mohamed M. El Dosoky
2013-05-01
Full Text Available The field of Affective Computing (AC) expects to narrow the communicative gap between the highly emotional human and the emotionally challenged computer by developing computational systems that recognize and respond to the affective states of the user. Affect-sensitive interfaces are being developed in a number of domains, including gaming, mental health, and learning technologies. Emotions are part of human life. Recently, interest has been growing among researchers in finding ways of detecting subjective information in blogs and other online social media. This paper is concerned with the automatic detection of emotions in Arabic text. The construction is based on a moderately sized Arabic emotion lexicon used to annotate Arabic children's stories for the six basic emotions: Joy, Fear, Sadness, Anger, Disgust, and Surprise. Our approach achieves 65% accuracy for emotion detection in Arabic text.
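A lexicon-based tagger of the kind described can be sketched in a few lines; the tiny English stand-in lexicon below is purely illustrative and not the authors' Arabic resource:

```python
from collections import Counter

# Minimal lexicon-based emotion tagger in the spirit of the paper: count
# lexicon hits per basic emotion and return the majority label.
LEXICON = {
    "happy": "joy", "laugh": "joy",
    "afraid": "fear", "monster": "fear",
    "cry": "sadness", "lost": "sadness",
    "angry": "anger", "shout": "anger",
}

def detect_emotion(tokens):
    hits = Counter(LEXICON[t] for t in tokens if t in LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(detect_emotion("the child was afraid of the monster".split()))  # fear
```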
Design-based approach to ethics in computer-aided diagnosis
Collmann, Jeff R.; Lin, Jyh-Shyan; Freedman, Matthew T.; Wu, Chris Y.; Hayes, Wendelin S.; Mun, Seong K.
1996-04-01
A design-based approach to ethical analysis examines how computer scientists, physicians and patients make and justify choices in designing, using and reacting to computer-aided diagnosis (CADx) systems. The basic hypothesis of this research is that values are embedded in CADx systems during all phases of their development, not just retrospectively imposed on them. This paper concentrates on the work of computer scientists and physicians as they attempt to resolve central technical questions in designing clinically functional CADx systems for lung cancer and breast cancer diagnosis. The work of Lo, Chan, Freedman, Lin, Wu and their colleagues provides the initial data on which this study is based. As these researchers seek to increase the rate of true positive classifications of detected abnormalities in chest radiographs and mammograms, they explore dimensions of the fundamental ethical principle of beneficence. The training of CADx systems demonstrates the key ethical dilemmas inherent in their current design.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2014-01-01
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
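A minimal random-walk Metropolis sketch of this setup, with a deliberately cheap stand-in for $\eta(\cdot)$; for a real physics model each iteration would trigger a full simulation, which is exactly the bottleneck described:

```python
import numpy as np

# Random-walk Metropolis for y = eta(theta) + eps, with a toy exponential
# decay model standing in for the expensive physics simulator eta.
rng = np.random.default_rng(5)
eta = lambda th: th[0] * np.exp(-th[1] * np.linspace(0, 4, 20))
theta_true = np.array([2.0, 0.7])
y = eta(theta_true) + rng.normal(0, 0.05, 20)     # synthetic measurements

def log_post(th, sigma=0.05):
    if np.any(th <= 0):                           # flat prior on theta > 0
        return -np.inf
    r = y - eta(th)
    return -0.5 * np.sum(r * r) / sigma**2

th, chain = np.array([1.0, 1.0]), []
lp = log_post(th)
for _ in range(5000):
    prop = th + rng.normal(0, 0.05, 2)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept step
        th, lp = prop, lp_prop
    chain.append(th)
print(np.mean(chain[1000:], axis=0))              # posterior mean estimate
```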
Hsu, Ting-Chia
2016-01-01
In this study, a peer assessment system using the grid-based knowledge classification approach was developed to improve students' performance during computer skills training. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a computer skills certification course. The participants were divided into three…
Hsu, Ching-Kun; Hwang, Gwo-Jen
2014-01-01
Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…
Inouye, Joshua M.; Kutch, Jason J.; Valero-Cuevas, Francisco J.
2013-01-01
We propose a complete methodology to find the full set of feasible grasp wrenches and the corresponding wrench-direction-independent grasp quality for a tendon-driven hand with arbitrary design parameters. Monte Carlo simulations on two representative designs combined with multiple linear regression identified the parameters with the greatest potential to increase this grasp metric. This synthesis of computational approaches now enables the systematic design, evaluation, and optimization of tendon-driven hands. PMID:23335864
Foerster, A.; Leymann, H. A. M.; Wiersig, J.
2017-03-01
We introduce an equation of motion approach that allows for an approximate evaluation of the time evolution of a quantum system, where the algebraic work to derive the equations of motion is done by the computer. The introduced procedures offer a variety of different types of approximations applicable for finite systems with strong coupling as well as for arbitrary large systems where augmented mean-field theories like the cluster expansion can be applied.
Computational approaches for the study of the role of small molecules in diseases
Directory of Open Access Journals (Sweden)
Monica Campillos
2016-12-01
Full Text Available An enormous amount of molecular and phenotypic information of drugs as well as diseases is now available in public repositories. Computational analysis of these datasets is facilitating the acquisition of a systems view of how drugs act on our human organism and interfere with diseases. Here, I highlight recent approaches integrating large-scale information of drugs and diseases that are contributing to change our current view on how drugs interfere with human diseases.
Computational Classification Approach to Profile Neuron Subtypes from Brain Activity Mapping Data
Meng Li; Fang Zhao; Jason Lee; Dong Wang; Hui Kuang; Joe Z Tsien
2015-01-01
The analysis of cell type-specific activity patterns during behaviors is important for better understanding of how neural circuits generate cognition, but has not been well explored from in vivo neurophysiological datasets. Here, we describe a computational approach to uncover distinct cell subpopulations from in vivo neural spike datasets. This method, termed “inter-spike-interval classification-analysis” (ISICA), is comprised of four major steps: spike pattern feature-extraction, pre-cluste...
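The first two ISICA steps lend themselves to a compact sketch: extract inter-spike-interval (ISI) features per neuron, then cluster. The features and cluster count below are illustrative choices, not the authors':

```python
import numpy as np
from sklearn.cluster import KMeans

# Derive ISI features per neuron from synthetic spike trains, then cluster
# the neurons into putative subtypes.
def isi_features(spike_times):
    isi = np.diff(np.sort(spike_times))
    return [np.mean(isi), np.std(isi) / (np.mean(isi) + 1e-12)]  # mean, CV

rng = np.random.default_rng(6)
neurons = [np.cumsum(rng.exponential(scale, 300))   # Poisson-like trains
           for scale in rng.choice([0.02, 0.2], size=40)]
F = np.array([isi_features(st) for st in neurons])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(F)
print(np.bincount(labels))                          # neurons per subtype
```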
Lightweight Tactical Client: A Capability-Based Approach to Command Post Computing
2015-12-01
From an operational standpoint, for example, a requirement of a Command Post Client (ref. 1) is the capability to operate for an extended period of time (48+ hr), to sustain operations during disconnected, intermittent, and latent states (including fully disconnected operations), and to operate on a physically light
A. Askim GULUMBAY, Anadolu University, TURKEY
2006-01-01
This book was edited by Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Press in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computer-supported learning and human-computer interactions. Bringing together scientists and research aspects from psychology, educational sciences, cogni...
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
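The multi-key idea can be illustrated with a toy, pure-Python stand-in for MapReduce: one map pass emits keys prefixed with an algorithm identifier so that several related algorithms share a single job's map/shuffle/reduce cycle:

```python
from collections import defaultdict

# Toy illustration of the multi-key idea -- not Hadoop. One mapper emits
# (algorithm_id, key) pairs so two related algorithms share one job.
def mapper(record):
    for word in record.split():
        yield ("wordcount", word), 1          # algorithm 1: word count
    yield ("linecount", "lines"), 1           # algorithm 2: line count

def run_job(records):
    groups = defaultdict(list)
    for rec in records:                       # map + shuffle
        for key, val in mapper(rec):
            groups[key].append(val)
    return {key: sum(vals) for key, vals in groups.items()}  # reduce

print(run_job(["big data goes big", "data is data"]))
```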
Wang, Pan; Dang, Li; Zhu, Bao-Ting
2016-01-01
Estrogens have a whole host of physiological functions in many human organs and systems, including the reproductive, cardiovascular, and central nervous systems. Many naturally-occurring compounds with estrogenic or antiestrogenic activity are present in our environment and food sources. Synthetic estrogens and antiestrogens are also important therapeutic agents. At the molecular level, estrogen receptors (ERs) mediate most of the well-known actions of estrogens. Given recent advances in computational modeling tools, it is now highly practical to use these tools to study the interaction of human ERs with various types of ligands. There are two common categories of modeling techniques: one is the quantitative structure activity relationship (QSAR) analysis, which uses the structural information of the interacting ligands to predict the binding site properties of a macromolecule, and the other one is molecular docking-based computational analysis, which uses the 3-dimensional structural information of both the ligands and the receptor to predict the binding interaction. In this review, we discuss recent results that employed these and other related computational modeling approaches to characterize the binding interaction of various estrogens and antiestrogens with the human ERs. These examples clearly demonstrate that the computational modeling approaches, when used in combination with other experimental methods, are powerful tools that can precisely predict the binding interaction of various estrogenic ligands and their derivatives with the human ERs.
Institute of Scientific and Technical Information of China (English)
高文; 陈熙霖
1997-01-01
The blur in target images caused by camera vibration due to robot motion or hand shaking, and by object(s) moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred-image restoration. Combining general optical flow with stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To address the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. To restore the blurred image, an iteration algorithm together with the obtained motion velocity is used. Experiments show that the proposed approach works well for both motion velocity computation and blurred-image restoration.
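Once the motion is known, restoration with a known blur kernel is classical. A sketch using Wiener-style deconvolution for horizontal linear motion (a simpler stand-in for the paper's iterative algorithm), with an assumed noise-to-signal constant:

```python
import numpy as np

# Wiener-style restoration of an image blurred by horizontal linear motion,
# assuming the motion length is known (e.g. estimated from optical flow).
def motion_psf(shape, length=9):
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length        # horizontal box-blur kernel
    return psf

rng = np.random.default_rng(7)
img = rng.random((64, 64))
H = np.fft.fft2(motion_psf(img.shape))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # circular blur
K = 1e-3                                   # noise-to-signal ratio (assumed)
W = np.conj(H) / (np.abs(H)**2 + K)        # Wiener filter in frequency space
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
print(f"restoration error: {np.abs(restored - img).mean():.4f}")
```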
Institute of Scientific and Technical Information of China (English)
SU Chong; LI Hong-Guang
2013-01-01
Numerous multi-objective decision-making problems in industrial process control engineering, such as control and operation performance evaluation, are being resolved through human-computer interactions. With regard to the problems from which traditional interactive evolutionary computing approaches suffer, i.e., limited searching ability and humans' strong subjectivity in multi-objective-attribute decision-making, a novel affective computing and learning solution adapted to the human-computer interaction mechanism is explicitly proposed. Therein, a stimulating-response-based affective computing model (STAM) is constructed, along with quantitative relations between the affective space and humans' subjective preferences. Thereafter, affective learning strategies based on genetic algorithms are introduced, which are responsible for gradually grasping the essentials of humans' subjective judgments in decision-making, reducing human subjective fatigue and making the decisions more objective and scientific. The affective learning algorithm's complexity and convergence analyses are given in Appendices A and B. To exemplify applications of the proposed methods, ad-hoc test functions and PID parameter tuning are presented as case studies, giving rise to satisfactory results and showing the validity of the contributions.
Jain, Tarun Kumar; Karunanithi, Sellam; Dhull, Varun Singh; Roy, Shambo Guha; Kumar, Rakesh
2014-04-01
(68)Ga-labelled [1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid]-1-NaI3-Octreotide ((68)Ga-DOTANOC) positron emission tomography/computed tomography (PET/CT) is an excellent modality in patients with carcinoma of unknown primary of neuroendocrine origin. Most of the primary lesions are located in the midgut region, where lesions are poorly resolved due to undistended and overlapping intestinal loops and motility-related artifacts. Although PET/CT enteroclysis, enterography and colonography have been described with (18)F-fluorodeoxyglucose, PET/CT enterography with (68)Ga-DOTANOC has not been described in the literature. Here, we present a case where (68)Ga-DOTANOC PET/CT enterography was useful in identifying and accurately delineating the primary neuroendocrine tumor lesion in the small intestine.
Duan, Wenbo; Kirby, Ray; Mudge, Peter; Gan, Tat-Hean
2016-12-01
Ultrasonic guided waves are often used in the detection of defects in oil and gas pipelines. It is common for these pipelines to be buried underground and this may restrict the length of the pipe that can be successfully tested. This is because acoustic energy travelling along the pipe walls may radiate out into the surrounding medium. Accordingly, it is important to develop a better understanding of the way in which elastic waves propagate along the walls of buried pipes, and so in this article a numerical model is developed that is suitable for computing the eigenmodes for uncoated and coated buried pipes. This is achieved by combining a one dimensional eigensolution based on the semi-analytic finite element (SAFE) method, with a perfectly matched layer (PML) for the infinite medium surrounding the pipe. This article also explores an alternative exponential complex coordinate stretching function for the PML in order to improve solution convergence. It is shown for buried pipelines that accurate solutions may be obtained over the entire frequency range typically used in long range ultrasonic testing (LRUT) using a PML layer with a thickness equal to the pipe wall thickness. This delivers a fast and computationally efficient method and it is shown for pipes buried in sand or soil that relevant eigenmodes can be computed and sorted in less than one second using relatively modest computer hardware. The method is also used to find eigenmodes for a buried pipe coated with the viscoelastic material bitumen. It was recently observed in the literature that a viscoelastic coating may effectively isolate particular eigenmodes so that energy does not radiate from these modes into the surrounding [elastic] medium. A similar effect is also observed in this article and it is shown that this occurs even for a relatively thin layer of bitumen, and when the shear impedance of the coating material is larger than that of the surrounding medium.
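The exact stretching profile used in the paper is not reproduced here, but an exponential complex coordinate stretch for a PML of thickness d can take the following illustrative form, with free constants a and b chosen for demonstration only:

```python
import numpy as np

# One possible exponential complex coordinate-stretching profile for a PML
# of thickness d starting at radius r0 (illustrative form and constants):
#   gamma(r) = 1 + i * a * (exp(b * (r - r0) / d) - 1)   for r in [r0, r0+d]
# The stretched coordinate (the integral of gamma over r) would then enter
# the SAFE matrices of the outer-medium elements.
def gamma(r, r0, d, a=2.0, b=3.0):
    s = np.clip((r - r0) / d, 0.0, 1.0)   # normalized depth into the PML
    return 1.0 + 1j * a * np.expm1(b * s)

r = np.linspace(0.0, 0.2, 5)              # radial coordinate (m)
print(gamma(r, r0=0.1, d=0.1))            # complex stretch per node
```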
Computational Approaches for Mining GRO-Seq Data to Identify and Characterize Active Enhancers.
Nagari, Anusha; Murakami, Shino; Malladi, Venkat S; Kraus, W Lee
2017-01-01
Transcriptional enhancers are DNA regulatory elements that are bound by transcription factors and act to positively regulate the expression of nearby or distally located target genes. Enhancers have many features that have been discovered using genomic analyses. Recent studies have shown that active enhancers recruit RNA polymerase II (Pol II) and are transcribed, producing enhancer RNAs (eRNAs). GRO-seq, a method for identifying the location and orientation of all actively transcribing RNA polymerases across the genome, is a powerful approach for monitoring nascent enhancer transcription. Furthermore, the unique pattern of enhancer transcription can be used to identify enhancers in the absence of any information about the underlying transcription factors. Here, we describe the computational approaches required to identify and analyze active enhancers using GRO-seq data, including data pre-processing, alignment, and transcript calling. In addition, we describe protocols and computational pipelines for mining GRO-seq data to identify active enhancers, as well as known transcription factor binding sites that are transcribed. Furthermore, we discuss approaches for integrating GRO-seq-based enhancer data with other genomic data, including target gene expression and function. Finally, we describe molecular biology assays that can be used to confirm and explore further the function of enhancers that have been identified using genomic assays. Together, these approaches should allow the user to identify and explore the features and biological functions of new cell type-specific enhancers.
Directory of Open Access Journals (Sweden)
Sashi Tarun
2011-01-01
Full Text Available From the very beginning, various measures have been taken to make better use of the limited resources available in a computer system, because much of the time a system sits idle and cannot exploit its capabilities as a whole, resulting in low performance. Parallel computing can work efficiently where operations are handled independently by multiple processors: all processing units work in parallel and increase system throughput without resource-allocation conflicts among them. However, this is limited to, and effective within, a single machine. In today's computing world, establishing and maintaining a high-speed computational environment in a distributed scenario is a challenging task, because operations no longer depend on a single resource but on interactions among resources spread across a vast network architecture. Current resource management systems work smoothly only when resources are applied within a cluster or a single organization, or are divided among a known set of users who need processing power; for a widely distributed environment, performing such operational activities is difficult because data is not maintained in a centralized location but is geographically dispersed across multiple remote computer systems. Computers in a distributed environment have to depend on multiple resources to complete their tasks, and effective performance with high availability of resources to each computer in this fast-moving distributed computational environment is the major concern. To solve this problem, a new approach called “Grid Computing” was coined. A grid uses a middleware to coordinate disparate resources across a network, allowing users to operate on them as a virtual whole and making computing fast. In this paper I want to…
DiLabio, Gino A; Torres, Edmanuel
2013-01-01
We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improved the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP to the β-scission of the cumyloxyl radical led to a calculated barrier height that was over-estimated by ca. 8 kcal/mol. We show in the present work that the source of this error is the previously developed carbon-atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, together with the previously published H-, N-, and O-DCPs, at the B3LYP-DCP/6-31+G(2d,2p) level on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...
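For context, DCPs are typically encoded in an effective-core-potential-style format as sums of atom-centered Gaussians; a generic sketch of that form is shown below. The notation is illustrative and not taken from the paper; coefficients and exponents are fitted per element, and smaller exponents place the correction farther from the nucleus, which is the lever the new C-DCP uses.

```latex
% Generic atom-centered Gaussian potential in ECP-style form (sketch only):
U_{\ell}(r) = \sum_{i} c_{\ell i}\, r^{\,n_{\ell i}-2}\, e^{-\zeta_{\ell i} r^{2}}
% smaller \zeta_{\ell i} -> broader Gaussian -> influence farther from the nucleus
```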
Interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities
Palm, Christoph; Siegmund, Heiko; Semmelmann, Matthias; Grafe, Claudia; Evert, Matthias; Schroeder, Josef A.
2016-03-01
Introduction - Diagnosis of abnormal cilia function is based on ultrastructural analysis of axoneme defects, especially the features of the inner and outer dynein arms, which are the motors of ciliary motility. Sub-optimal biopsy material, along with methodological and intrinsic electron-microscopy factors, makes the evaluation of ciliary defects difficult. We present a computer-assisted approach, based on state-of-the-art image analysis and object recognition methods, that yields a time-saving and efficient diagnosis of cilia dysfunction. Method - The presented approach is based on a pipeline of basic image-processing methods such as smoothing, thresholding, and ellipse fitting; the integration of application-specific knowledge yields robust segmentations even in cases of image artifacts. The method is built hierarchically, starting with the detection of cilia within the image, followed by the detection of the nine doublets within each analyzable cilium, and ending with the detection of the dynein arms of each doublet. The process is concluded by a rough classification of the dynein arms as the basis for a computer-assisted diagnosis. Additionally, the interaction possibilities are designed in such a way that the results remain reproducible, given the completion report. Results - A qualitative evaluation showed reasonable detection results for cilia, doublets, and dynein arms. However, since a ground truth is missing, the variation of the computer-assisted diagnosis should be within the subjective bias of human diagnosticians. The results of a first quantitative evaluation with five human experts and six images containing 12 analyzable cilia showed that, with default parameterization, 91.6% of the cilia and 98% of the doublets were found. The computer-assisted approach rated 66% of the inner and outer dynein arms correctly in the cases where all human experts agreed. However, the quality of the dynein arm classification in particular may be improved in future work.
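A minimal sketch of the first pipeline stage (smoothing, thresholding, ellipse fitting) is given below using scikit-image; the function name, parameter values, and selection criteria are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage import filters, measure

def detect_cilia(image, min_area=200, max_ecc=0.9):
    """Hypothetical first stage: smooth, threshold, and keep roughly
    elliptical blobs as candidate cilium cross-sections."""
    smoothed = filters.gaussian(image, sigma=2)
    binary = smoothed > filters.threshold_otsu(smoothed)
    labels = measure.label(binary)
    candidates = []
    for region in measure.regionprops(labels):
        # ellipse-fit parameters come directly from regionprops
        if region.area >= min_area and region.eccentricity <= max_ecc:
            candidates.append({
                "centroid": region.centroid,
                "axes": (region.major_axis_length, region.minor_axis_length),
                "orientation": region.orientation,
            })
    return candidates  # doublet/dynein-arm detection would recurse per blob
```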
2017-01-01
The present study proposes an integrated computer-aided approach combining femur surface modeling, fracture-evidence recovery, plate creation, and plate modification in order to conduct a parametric investigation of the design of a custom plate for a specific patient. The study improves the efficiency of designing patient-specific plates based on the patient's femur parameters and the fracture information, and the present approach also opens the way to plate modification and optimization. The three-dimensional (3D) surface model of a detailed femur and the corresponding fixation plate were represented with high-level feature parameters, and the shape of the specific plate was recursively modified in order to obtain the optimal plate for a specific patient. The proposed approach was tested and verified on a case study, and it could help orthopedic surgeons design and modify the plate to fit the specific femur anatomy and the fracture information. PMID:28203270
Fleischhauer, Robert; Božić, Marko; Kaliske, Michael
2016-11-01
The paper introduces a novel approach to computational homogenization that bridges the scales from the microscale to the macroscale: whenever the microstructure is in an equilibrium state, the macrostructure needs to be in equilibrium, too. The approach is based on the concept of representative volume elements (RVEs), which states that an assemblage of representative elements should be able to resemble the macrostructure. The resulting key assumption is the continuity of the appropriate kinematic fields across both scales. This assumption motivates the following idea: in contrast to existing approaches, where mostly constitutive quantities are homogenized, here the balance equations that govern the considered field quantities are homogenized. The approach is applied to the fully coupled partial differential equations of thermomechanics, solved by the finite element (FE) method. A novel, consistent homogenization finite element is given with respect to discretized residual formulations and linearization terms. The presented FE has no restrictions regarding the thermomechanical constitutive laws that characterize the microstructure. A first verification of the presented approach is carried out against semi-analytical and reference solutions within the range of one-dimensional small-strain thermoelasticity. Further verification is obtained by comparison to the classical FE^2 method and its different types of boundary conditions within a finite-deformation setting of purely mechanical problems. Furthermore, the efficiency of the novel approach is investigated and compared. Finally, structural examples are shown in order to demonstrate the applicability of the presented homogenization framework to finite thermo-inelasticity at different length scales.
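For orientation, classical first-order homogenization averages field quantities over an RVE of volume |V| and enforces the Hill-Mandel macro-homogeneity condition, sketched below for the mechanical part in terms of the deformation gradient F and the first Piola-Kirchhoff stress P. This is the baseline that FE^2-type methods are built on; the paper's approach instead homogenizes the balance equations themselves, and these relations are given here only as the point of comparison.

```latex
% Classical RVE averages and the Hill-Mandel condition (baseline sketch,
% mechanical part only; not the paper's homogenization of balance equations):
\bar{\mathbf{F}} = \frac{1}{|V|}\int_{V}\mathbf{F}\,\mathrm{d}V, \qquad
\bar{\mathbf{P}} = \frac{1}{|V|}\int_{V}\mathbf{P}\,\mathrm{d}V, \qquad
\bar{\mathbf{P}} : \delta\bar{\mathbf{F}}
  = \frac{1}{|V|}\int_{V}\mathbf{P} : \delta\mathbf{F}\,\mathrm{d}V
```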
Institute of Scientific and Technical Information of China (English)
Park, Se-Hwan; Gu, Byong-Kook; Shin, Joong-Ho; Yoo, Geun-Jong
2002-01-01
The cylindrical cam mechanism, one of the best pieces of equipment for accomplishing accurate motion transmission, is widely used in industrial fields such as machine tool exchangers, textile machinery, and automatic transfer equipment. This paper proposes a new approach to the shape design and manufacturing of the cylindrical cam. The design approach uses the relative velocity concept and the manufacturing approach uses the inverse kinematics concept. For the shape design, the contact points betw...
New Approaches to the Computer Simulation of Amorphous Alloys: A Review
Directory of Open Access Journals (Sweden)
Fernando Alvarez-Ramirez
2011-04-01
Full Text Available In this work we review our new methods for computer-generating amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe2 chalcogenide; aluminum-based systems: AlN and AlSi; and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized, periodically-continued cells with at least 108 atoms. The computational thermal process used to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, which consists in linearly heating the samples to just below their melting (or liquidus) temperatures and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps; we find that a time step of four times the default is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared, whenever possible, with experimental results, and the agreement is very good. For some materials we report studies of the effect of the topological disorder on their electronic and vibrational densities of states and on their optical properties.
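As a small illustration of the undermelt-quench idea, the sketch below builds the piecewise-linear target-temperature schedule (heat to just below the melting or liquidus point, then cool); the function name and arguments are assumptions for illustration, not part of the reviewed methods.

```python
import numpy as np

def undermelt_quench(T_start, T_undermelt, n_heat, n_cool):
    """Piecewise-linear per-step target temperatures: heat linearly to just
    below the melting/liquidus temperature, then cool linearly back down."""
    heat = np.linspace(T_start, T_undermelt, n_heat)
    cool = np.linspace(T_undermelt, T_start, n_cool)
    return np.concatenate([heat, cool[1:]])  # drop the duplicated peak point

# e.g. schedule = undermelt_quench(300.0, 1600.0, n_heat=5000, n_cool=5000)
```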
Kaneko, Toshihiro; Bai, Jaeil; Yasuoka, Kenji; Mitsutake, Ayori; Zeng, Xiao Cheng
2013-08-13
We devise a new computational approach to compute solid-liquid phase equilibria of confined fluids. Specifically, we extend the multibaric-multithermal ensemble method with anisotropic pressure control to achieve the solid-liquid phase equilibrium for water confined inside slit nanopores (with slit width h ranging from 5.4 Å to 7.2 Å). A unique feature of this multibaric-multithermal ensemble is that the freezing points of confined water can be determined from the heat-capacity peaks. The new approach has been applied to compute the freezing points of two monolayer ices observed in our simulations, namely a high-density flat rhombic monolayer ice (HD-fRMI) and a high-density puckered rhombic monolayer ice (HD-pRMI). We find that the liquid-to-solid transition temperature (i.e., the freezing point) of HD-pRMI depends on the slit width h, whereas that of HD-fRMI is nearly independent of h.
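For reference, the heat-capacity peak used to locate the freezing point can be obtained from the standard enthalpy-fluctuation relation (a textbook formula, not a detail quoted from the paper):

```latex
% Isobaric heat capacity from enthalpy fluctuations in the sampled ensemble;
% the freezing point is read off the temperature at which C_P(T) peaks.
C_{P}(T) = \frac{\langle H^{2}\rangle_{T,P} - \langle H\rangle_{T,P}^{2}}{k_{B}\,T^{2}}
```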
Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P
2014-01-01
Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and the identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all of the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline comprising image segmentation, feature extraction, and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open-source software. The source code for the data analysis is written in R, and the equations used to calculate the image descriptors are also provided.
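Although the published analysis code is in R, a minimal Python sketch of the same pipeline shape (segment, extract generic shape features, reduce with PCA) is given below; the feature set, thresholds, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from skimage import filters, measure
from sklearn.decomposition import PCA

def rosette_features(image):
    """Toy pipeline stage: Otsu-segment the rosette (assumed to be the
    largest foreground blob) and return a generic shape-feature vector."""
    binary = image > filters.threshold_otsu(image)
    regions = measure.regionprops(measure.label(binary))
    rosette = max(regions, key=lambda r: r.area)
    return [rosette.area, rosette.perimeter, rosette.eccentricity,
            rosette.solidity, rosette.extent]

def shape_components(images, n_components=5):
    """PCA over per-image feature vectors; the abstract reports that ~5
    principal components capture almost all observed shape variation."""
    X = np.array([rosette_features(im) for im in images])
    return PCA(n_components=n_components).fit_transform(X)
```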
A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell
Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris
2008-01-01
NASA and its contractors are working on structural concepts for absorbing the impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell, honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions revealed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for the selection of the critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distributions is assumed, and therefore any parameter value within the range computed by the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the squared error using genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
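A minimal sketch of the reconciliation step is shown below, using SciPy's differential evolution as a stand-in for the genetic optimizer and a toy crush-response surrogate in place of the fitted response surface; all names, bounds, and the surrogate form are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def surrogate(p, deflection):
    """Hypothetical response surface mapping uncertain parameters p to a
    predicted load curve: linear loading capped at a crush plateau."""
    stiffness, plateau = p
    return np.minimum(stiffness * deflection, plateau)

def reconcile(test_deflection, test_load, bounds):
    """Match test means to model output by minimizing the squared error with
    a global, genetic-style optimizer; uniform bounds encode the assumption
    that any parameter value in range is equally likely."""
    def sq_error(p):
        return np.sum((surrogate(p, test_deflection) - test_load) ** 2)
    result = differential_evolution(sq_error, bounds, seed=0)
    return result.x, result.fun

# Example with made-up data:
# x = np.linspace(0.0, 10.0, 50)
# params, err = reconcile(x, np.minimum(3.0 * x, 12.0), [(0.1, 10.0), (1.0, 50.0)])
```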
Computational Approaches for Prediction of Pathogen-Host Protein-Protein Interactions
Directory of Open Access Journals (Sweden)
Esmaeil Nourani
2015-02-01
Full Text Available Infectious diseases are still among the major and most prevalent health problems, mostly because of the drug resistance of novel variants of pathogens. Molecular interactions between pathogens and their hosts are the key part of the infection mechanisms, and novel antimicrobial therapeutics to fight drug resistance are only possible given a thorough understanding of pathogen-host interaction (PHI) systems. Existing databases, which contain experimentally verified PHI data, suffer from a scarcity of reported interactions due to the technically challenging and time-consuming process of experiments. This has motivated many researchers to address the problem by proposing computational approaches for the analysis and prediction of PHIs. The computational methods primarily utilize sequence information, protein structure, and known interactions. Classic machine learning techniques are used when there are sufficient known interactions to serve as training data; otherwise, transfer and multi-task learning methods are preferred. Here, we present an overview of these computational approaches for PHI prediction, discussing their strengths and weaknesses, together with future directions.
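As a minimal illustration of the classic machine-learning route, the sketch below encodes a pathogen-host protein pair by k-mer composition and scores a random-forest classifier by cross-validation; the feature encoding and model choice are illustrative assumptions, not a method prescribed in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def kmer_counts(seq, k=2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Normalized k-mer composition of one protein sequence."""
    idx = {a: i for i, a in enumerate(alphabet)}
    v = np.zeros(len(alphabet) ** k)
    for i in range(len(seq) - k + 1):
        code, valid = 0, True
        for c in seq[i:i + k]:
            if c not in idx:
                valid = False  # skip k-mers with non-standard residues
                break
            code = code * len(alphabet) + idx[c]
        if valid:
            v[code] += 1
    return v / max(1.0, v.sum())

def phi_features(pathogen_seq, host_seq):
    # a pathogen-host pair is encoded as the concatenation of both compositions
    return np.concatenate([kmer_counts(pathogen_seq), kmer_counts(host_seq)])

# With labeled pairs (1 = interacts, 0 = does not):
# X = np.array([phi_features(p, h) for p, h in pairs]); y = np.array(labels)
# print(cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5))
```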
Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora
2014-01-01
In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…
Computer/information security design approaches for Complex 21/Reconfiguration facilities
Energy Technology Data Exchange (ETDEWEB)
Hunteman, W.J.; Zack, N.R. [Los Alamos National Lab., NM (United States). Safeguards Systems Group; Jaeger, C.D. [Sandia National Labs., Albuquerque, NM (United States). Surety/Dismantlement Dept.
1993-12-31
Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues being addressed in the integrated design effort for the tritium, uranium/lithium, plutonium, plutonium storage, and high explosive/assembly facilities.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to handle some specific types of nodules connected to the pleura or to vessels, and it consists of basic image-processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database.
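For illustration, the core of a fuzzy connectedness computation is a max-min path propagation from seed points, sketched below for an n-dimensional image; the affinity function and its parameters are toy assumptions, and the actual algorithm additionally relies on the generated masks and the evolutionary optimization of seeds described above.

```python
import heapq
import numpy as np

def affinity(img, a, b, sigma=30.0):
    """Toy homogeneity-based affinity between adjacent voxels (assumption)."""
    diff = float(img[a]) - float(img[b])
    return float(np.exp(-diff * diff / (2.0 * sigma * sigma)))

def fuzzy_connectedness(img, seeds):
    """Connectivity(c) = max over paths from a seed of the minimum affinity
    along the path, computed by Dijkstra-style max-min propagation."""
    conn = np.zeros(img.shape)
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg_strength, c = heapq.heappop(heap)
        if -neg_strength < conn[c]:
            continue  # stale heap entry
        for axis in range(img.ndim):          # face-connected neighbors
            for step in (-1, 1):
                d = list(c)
                d[axis] += step
                d = tuple(d)
                if not all(0 <= d[i] < img.shape[i] for i in range(img.ndim)):
                    continue
                strength = min(conn[c], affinity(img, c, d))
                if strength > conn[d]:
                    conn[d] = strength
                    heapq.heappush(heap, (-strength, d))
    return conn  # thresholding conn yields the segmented object
```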