WorldWideScience

Sample records for accurate computational approach

  1. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with the cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
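The extrapolation step of this record can be sketched in isolation. The following is a hedged toy illustration (all function names and data are ours, not the paper's): given continuation-method samples of the Thevenin-to-load impedance ratio versus load power, fit a cubic through the last four points and bisect for the load at which the ratio reaches 1, i.e., the impedance-match (voltage collapse) point.

```python
def fit_cubic(xs, ys):
    """Exactly fit y = a3*x^3 + a2*x^2 + a1*x + a0 through 4 points."""
    n = 4
    A = [[x**3, x**2, x, 1.0] for x in xs]
    b = list(ys)
    for col in range(n):                      # Gaussian elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= fac * A[col][c]
            b[r] -= fac * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / A[r][r]
    return coef                               # [a3, a2, a1, a0]

def load_margin(loads, ratios):
    """Bisect the extrapolated cubic for the load where the ratio hits 1."""
    a3, a2, a1, a0 = fit_cubic(loads[-4:], ratios[-4:])
    f = lambda p: ((a3 * p + a2) * p + a1) * p + a0 - 1.0
    lo, hi = loads[-1], 2.0 * loads[-1]       # bracket beyond last sample
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

On synthetic data where the ratio grows linearly with load, the extrapolated margin is the exact root of the fitted cubic.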

  2. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide a clinically feasible approach to characterizing a kV energy spectrum to be used for patient-specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  3. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute the compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture intricate details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and an underwater explosion.
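The building block that AUSM+-up extends is the AUSM+ split-Mach-number polynomials (Liou). The sketch below shows only those textbook polynomials with the standard coefficient β = 1/8; the pressure/velocity diffusion terms that make the scheme "-up" and suitable for liquid flows are omitted, so this is an illustration of the flux-splitting idea, not the authors' full scheme.

```python
# AUSM+ split Mach numbers: subsonic 4th-degree polynomials,
# supersonic one-sided upwinding. Consistency: M+(M) + M-(M) == M.
BETA = 1.0 / 8.0  # standard AUSM+ polynomial coefficient

def mach_plus(M):
    """Split Mach polynomial M+(M)."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M))
    return 0.25 * (M + 1.0) ** 2 + BETA * (M * M - 1.0) ** 2

def mach_minus(M):
    """Split Mach polynomial M-(M)."""
    if abs(M) >= 1.0:
        return 0.5 * (M - abs(M))
    return -0.25 * (M - 1.0) ** 2 - BETA * (M * M - 1.0) ** 2

def interface_mach(M_left, M_right):
    """Interface Mach number m_1/2 = M+(M_L) + M-(M_R)."""
    return mach_plus(M_left) + mach_minus(M_right)
```

For uniform flow the splitting is consistent: the interface Mach number reproduces the cell Mach number exactly, and supersonic states are fully upwinded.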

  4. Accurate computation of Mathieu functions

    CERN Document Server

    Bibby, Malcolm M

    2013-01-01

    This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach

  5. Accurate Assessment of Computed Order Tracking

    Directory of Open Access Journals (Sweden)

    P.N. Saavedra

    2006-01-01

    Full Text Available Spectral vibration analysis using the Fourier transform is the most common technique for evaluating the mechanical condition of machinery working in a stationary regime. However, machinery operating in transient modes, such as variable-speed equipment, generates spectra with distinct frequency content at each instant, and the standard approach is not directly applicable for diagnostics. The "order tracking" technique is a suitable tool for analyzing variable-speed machines. We have studied computed order tracking (COT), and a new procedure is proposed for resolving the indeterminate results generated by the traditional method at constant speed. The effect on accuracy of the assumptions inherent in COT was assessed using data from various simulations. The use of these simulations allowed us to determine the effect of different user-defined factors on the overall true accuracy of the method: the signal and tachometric pulse sampling frequency, the method of amplitude interpolation, and the number of tachometric pulses per revolution. Tests on real data measured on the main transmissions of a mining shovel were carried out, and we concluded that the new method is appropriate for the condition monitoring of this type of machine.
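The core COT operation can be sketched as follows, under the simplest set of the assumptions the abstract mentions: one tachometer pulse per revolution, constant shaft speed between consecutive pulses, and linear amplitude interpolation. This is a hedged minimal illustration, not the paper's refined procedure.

```python
import bisect
import math

def lin_interp(t, x, tj):
    """Linear amplitude interpolation on the original time grid."""
    i = bisect.bisect_right(t, tj) - 1
    i = max(0, min(i, len(t) - 2))
    w = (tj - t[i]) / (t[i + 1] - t[i])
    return (1.0 - w) * x[i] + w * x[i + 1]

def cot_resample(t, x, tach_times, samples_per_rev):
    """Resample the signal at constant shaft-angle increments,
    assuming constant speed between consecutive tacho pulses."""
    out = []
    for k in range(len(tach_times) - 1):
        t0, t1 = tach_times[k], tach_times[k + 1]
        for j in range(samples_per_rev):
            tj = t0 + (t1 - t0) * j / samples_per_rev
            out.append(lin_interp(t, x, tj))
    return out

# demo: shaft at a constant 2 rev/s, vibration = one cycle per revolution
t = [i * 0.001 for i in range(2001)]
x = [math.sin(4.0 * math.pi * ti) for ti in t]
resampled = cot_resample(t, x, [0.0, 0.5, 1.0], 8)
```

After angle-domain resampling, the signal is exactly periodic per revolution, which is what makes subsequent order spectra well defined for variable-speed machines.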

  6. Accurate computer simulation of a drift chamber

    International Nuclear Information System (INIS)

    Killian, T.J.

    1980-01-01

    A general purpose program for drift chamber studies is described. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. Results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR

  8. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
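The objective being optimized can be illustrated on a toy scale. The sketch below brute-forces all element-preserving bijections for a tiny "reaction" and keeps the one that breaks or forms the fewest bonds; this is only a crude stand-in for the paper's MWED MILP formulation, which uses bond reaction propensities as weights and scales far beyond enumeration.

```python
# Toy minimum-edit atom mapping: atoms are indices, bonds are sets of
# frozenset index pairs. Cost = bonds broken + bonds formed.
from itertools import permutations

def best_mapping(elems_r, bonds_r, elems_p, bonds_p):
    n = len(elems_r)
    best, best_cost = None, None
    for perm in permutations(range(n)):
        # only element-preserving bijections are chemically valid
        if any(elems_r[i] != elems_p[perm[i]] for i in range(n)):
            continue
        mapped = {frozenset({perm[a], perm[b]})
                  for a, b in (tuple(bd) for bd in bonds_r)}
        cost = len(mapped ^ bonds_p)   # symmetric difference of bond sets
        if best_cost is None or cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```

For a linear C-C-O reactant and the same molecule with atoms relabeled as O-C-C, the unique zero-cost mapping sends the oxygen to the oxygen and preserves both bonds.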

  9. Computing Accurate Grammatical Feedback in a Virtual Writing Conference for German-Speaking Elementary-School Children: An Approach Based on Natural Language Generation

    Science.gov (United States)

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2009-01-01

    We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

  10. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of the switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes produce quite different and even fallacious results, even when the step length is much smaller than that adopted in the presented method. With such high computational accuracy, the presented method is applicable to dynamical systems with hysteresis nonlinearities.
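The switching-point idea can be sketched generically: march the integrator until the switching function changes sign inside a step, then bisect on the sub-step length to land on the switch before continuing. This hedged sketch uses plain RK4 and a linear oscillator as the demo system (the paper instead uses the precise integration method on the aeroelastic equations); the first zero crossing of y'' = -y from y(0) = 1, y'(0) = 0 is at t = π/2.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a list."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def find_switch(f, g, t, y, h):
    """March until g changes sign across a step, then bisect the
    sub-step length to pin down the switching point."""
    while True:
        y_new = rk4_step(f, t, y, h)
        if g(y) * g(y_new) <= 0.0:
            break
        t, y = t + h, y_new
    lo, hi = 0.0, h
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(y) * g(rk4_step(f, t, y, mid)) <= 0.0:
            hi = mid
        else:
            lo = mid
    return t + hi, rk4_step(f, t, y, hi)

# demo: y'' = -y from y(0)=1, y'(0)=0; switching surface g(y) = y
f = lambda t, y: [y[1], -y[0]]
t_switch, y_switch = find_switch(f, lambda y: y[0], 0.0, [1.0, 0.0], 0.01)
```

The bisection refines the crossing of the numerical trajectory far below the step size, which is why locating switching points accurately matters more than shrinking the global step.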

  11. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses) without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  12. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ′). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
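For context, the brute-force quadrature that 2-FAST is designed to avoid looks like this (a hedged baseline sketch, not the 2-FAST algorithm): ξ_0(r) = (1/2π²) ∫ P(k) j_0(kr) k² dk with j_0(x) = sin(x)/x. For a toy Gaussian spectrum P(k) = exp(−k²) the integral has the closed form ξ_0(r) = (√π / 8π²) exp(−r²/4), which lets the quadrature be checked.

```python
import math

def j0(x):
    """Spherical Bessel function of order zero, j0(x) = sin(x)/x."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def xi0(r, P, kmax=20.0, n=100000):
    """Brute-force trapezoid quadrature of
    xi_0(r) = (1 / 2 pi^2) * Int_0^kmax P(k) j0(k r) k^2 dk."""
    h = kmax / n
    s = 0.5 * P(kmax) * j0(kmax * r) * kmax ** 2   # k = 0 term vanishes
    for i in range(1, n):
        k = i * h
        s += P(k) * j0(k * r) * k * k
    return s * h / (2.0 * math.pi ** 2)
```

For realistic power spectra and large r the integrand oscillates rapidly and this direct approach becomes both slow and inaccurate, which is exactly the problem the FFTLog-based factorization solves.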

  13. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast." Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The sample of the study included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing computed tomography images.

  14. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  15. General approach for accurate resonance analysis in transformer windings

    NARCIS (Netherlands)

    Popov, M.

    2018-01-01

    In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical

  16. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    Directory of Open Access Journals (Sweden)

    David M. Benoit

    2011-08-01

    Full Text Available We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and that it can be easily deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters.

  17. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  18. A new approach to determine accurately minority-carrier lifetime

    International Nuclear Information System (INIS)

    Idali Oumhand, M.; Mir, Y.; Zazoui, M.

    2009-01-01

    Electron or proton irradiations introduce recombination centers, which tend to affect solar cell parameters by reducing the minority-carrier lifetime (MCLT). Because the MCLT plays a fundamental role in the performance degradation of solar cells, in this work we present a new approach that allows us to obtain accurate values of the MCLT. The relationship between the MCLT in the p-region and the n-region, both before and after irradiation, has been determined by the new method. The validity and accuracy of this approach are supported by the fact that the degradation parameters that fit the experimental data are the same for both the short-circuit current and the open-circuit voltage. This method is applied to the p+/n-InGaP solar cell under 1 MeV electron irradiation.

  19. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Science.gov (United States)

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way for the calculation. The explicit approximation error in terms of parameters is given numerically. Experiments demonstrate that the proposed approach results in a significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.

  20. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using the computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and to act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  1. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo-detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  2. Accurate and efficient computation of synchrotron radiation functions

    International Nuclear Information System (INIS)

    MacLeod, Allan J.

    2000-01-01

    We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x ∫_x^∞ K_(5/3)(y) dy, F_p(x) = x K_(2/3)(x), and G_p(x) = x^(1/3) K_(1/3)(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
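A slow but simple reference implementation of F_p and G_p can be built from the integral representation K_ν(x) = ∫_0^∞ exp(−x cosh t) cosh(νt) dt, evaluated by trapezoid quadrature. This hedged sketch is only a baseline against which fast Chebyshev-coefficient evaluations like the paper's can be checked (the function F(x) involving the integral of K_(5/3) is omitted here).

```python
import math

def bessel_k(nu, x, tmax=20.0, n=20000):
    """K_nu(x) via the integral representation
    K_nu(x) = Int_0^inf exp(-x cosh t) cosh(nu t) dt."""
    h = tmax / n
    s = 0.5 * (math.exp(-x)
               + math.exp(-x * math.cosh(tmax)) * math.cosh(nu * tmax))
    for i in range(1, n):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return s * h

def F_p(x):
    """Synchrotron function F_p(x) = x * K_{2/3}(x)."""
    return x * bessel_k(2.0 / 3.0, x)

def G_p(x):
    """Synchrotron function G_p(x) = x^{1/3} * K_{1/3}(x)."""
    return x ** (1.0 / 3.0) * bessel_k(1.0 / 3.0, x)
```

The quadrature can be validated against the closed form K_(1/2)(x) = sqrt(π/2x) e^(−x); a Chebyshev evaluation like the paper's delivers near machine precision at a tiny fraction of this cost.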

  3. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes as examples. © 2016 Elsevier Inc. All rights reserved.

  4. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case, because travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap between the system and its equilibrium.
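The boundedly rational choice rule itself is simple enough to sketch. The following toy function (our own naming, not the paper's model) picks a route from reported travel times: random choice when the reported difference is below the threshold BR, otherwise the reported faster route.

```python
import random

def choose_route(t1, t2, BR, rng=random):
    """Return route index 0 or 1 given reported travel times t1, t2.
    Below the boundedly rational threshold BR the traveler is
    indifferent and chooses at random."""
    if abs(t1 - t2) < BR:
        return rng.randrange(2)   # indifferent: 50/50 split
    return 0 if t1 < t2 else 1
```

With a wide threshold, near-equal reported times produce an even split of travelers over the two routes, which is the mechanism that damps the oscillations caused by everyone chasing the (possibly stale) best-reported route.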

  5. Accurate computation of transfer maps from magnetic field data

    International Nuclear Information System (INIS)

    Venturini, Marco; Dragt, Alex J.

    1999-01-01

    Consider an arbitrary beamline magnet. Suppose one component (for example, the radial component) of the magnetic field is known on the surface of some imaginary cylinder coaxial to and contained within the magnet aperture. This information can be obtained either by direct measurement or by computation with the aid of some 3D electromagnetic code. Alternatively, suppose that the field harmonics have been measured by using a spinning coil. We describe how this information can be used to compute the exact transfer map for the beamline element. This transfer map takes into account all effects of real beamline elements including fringe-field, pseudo-multipole, and real multipole error effects. The method we describe automatically takes into account the smoothing properties of the Laplace-Green function. Consequently, it is robust against both measurement and electromagnetic code errors. As an illustration we apply the method to the field analysis of high-gradient interaction region quadrupoles in the Large Hadron Collider (LHC)

  6. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, although volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon datasets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of (i) post-crystallization Pb loss from percolation of hydrothermal fluids (even when using chemical abrasion) with (ii) age dispersion from prolonged residence of earlier-crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  7. Simple, accurate equations for human blood O2 dissociation computations.

    Science.gov (United States)

    Severinghaus, J W

    1979-03-01

    Hill's equation can be slightly modified to fit the standard human blood O2 dissociation curve to within ±0.0055 fractional saturation (S) over the range 0 < S < 1. Other modifications of Hill's equation may be used to compute Po2 (Torr) from S (Eq. 2) and the temperature coefficient of Po2 (Eq. 3); the variation of the Bohr coefficient with Po2 is given by Eq. 4:

    S = (23,400 / (Po2^3 + 150·Po2) + 1)^-1   (1)
    ln Po2 = 0.385 · ln(S^-1 − 1)^-1 + 3.32 − (72·S)^-1 − 0.17·S^6   (2)
    Δln Po2/ΔT = 0.058 · ((0.243·Po2/100)^3.88 + 1)^-1 + 0.013   (3)
    Δln Po2/ΔpH = (Po2/26.6)^0.184 − 2.2   (4)

    Procedures are described to determine the Po2 and S of blood iteratively after extraction or addition of a defined amount of O2, and to compute the P50 of blood from a single sample after measuring Po2, pH, and S.
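    As an illustration, Eqs. (1) and (2) translate directly into code. The sketch below is an independent rendering of the published formulas, not the author's original implementation:

    ```python
    import math

    def o2_saturation(po2):
        """Eq. 1: fractional O2 saturation S from Po2 in Torr."""
        return 1.0 / (23400.0 / (po2 ** 3 + 150.0 * po2) + 1.0)

    def o2_po2(s):
        """Eq. 2: Po2 in Torr from fractional saturation S (0 < S < 1)."""
        ln_po2 = (0.385 * math.log(1.0 / (1.0 / s - 1.0))
                  + 3.32 - 1.0 / (72.0 * s) - 0.17 * s ** 6)
        return math.exp(ln_po2)

    # At Po2 = 26.6 Torr (roughly the standard P50), S should be close to 0.5.
    s50 = o2_saturation(26.6)
    ```

    Note that Eqs. (1) and (2) are independent approximations rather than exact inverses, so round-tripping Po2 → S → Po2 agrees only to within a fraction of a Torr.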

  8. An approach for the accurate measurement of social morality levels.

    Science.gov (United States)

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the "5∶1 rewards-to-punishment rule," which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials.

  9. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  10. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    International Nuclear Information System (INIS)

    Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg

    2015-01-01

    We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ0, at 0 K and (ii) the energy barrier, ΔEb, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ0 at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement.

  11. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy splitting in dimer and fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  12. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels.
Carbon monoxide measurements indicated that the proposed surrogates
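    The property-matching step can be sketched as a least-squares problem if one assumes, purely for illustration, that the target properties blend linearly in mole fraction. The actual methodology uses REFPROP distillation simulations and CHEMKIN RON predictions; the species properties below are invented placeholders:

    ```python
    import numpy as np

    # Hypothetical property matrix: rows = target properties (H/C ratio,
    # density in g/mL, blending RON), columns = candidate palette species.
    # These numbers are illustrative only, not DHA-derived data.
    P = np.array([
        [2.25, 1.60, 2.00],    # H/C ratio of each pure component
        [0.66, 0.87, 0.69],    # density (g/mL)
        [100.0, 111.0, 25.0],  # blending RON
    ])
    target = np.array([2.05, 0.70, 91.0])  # target fuel properties

    # Solve for mole fractions x >= 0 summing to 1 (simple projected least
    # squares; a real workflow would use a constrained optimizer coupled to
    # nonlinear property simulations).
    x, *_ = np.linalg.lstsq(P, target, rcond=None)
    x = np.clip(x, 0.0, None)
    x = x / x.sum()
    predicted = P @ x
    ```

    The clip-and-renormalize step is a crude stand-in for proper simplex constraints, but it shows the shape of the optimization the abstract describes.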

  13. Computer architecture a quantitative approach

    CERN Document Server

    Hennessy, John L

    2019-01-01

    Computer Architecture: A Quantitative Approach, Sixth Edition has been considered essential reading by instructors, students and practitioners of computer design for over 20 years. The sixth edition of this classic textbook is fully revised with the latest developments in processor and system architecture. It now features examples from the RISC-V (RISC Five) instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design.

  14. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  15. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
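    A minimal sketch of the underlying idea is shown below: Newton's method on the Legendre three-term recurrence with trigonometric initial guesses. The paper's O(n) algorithm additionally replaces the recurrence with asymptotic evaluations of P_n; this sketch does not, so it costs O(n²):

    ```python
    import numpy as np

    def gauss_legendre(n, tol=1e-14, max_iter=30):
        """Gauss-Legendre nodes and weights via Newton iteration."""
        k = np.arange(1, n + 1)
        # Asymptotic (Chebyshev-like) initial guesses for the roots of P_n.
        x = np.cos(np.pi * (k - 0.25) / (n + 0.5))
        for _ in range(max_iter):
            # Evaluate P_n and P_{n-1} by the three-term recurrence.
            p0 = np.ones_like(x)
            p1 = x.copy()
            for j in range(2, n + 1):
                p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
            # P_n'(x) = n (x P_n - P_{n-1}) / (x^2 - 1)
            dp = n * (x * p1 - p0) / (x * x - 1)
            dx = p1 / dp
            x -= dx
            if np.max(np.abs(dx)) < tol:
                break
        # w_k = 2 / ((1 - x_k^2) P_n'(x_k)^2)
        w = 2.0 / ((1 - x * x) * dp * dp)
        return x, w
    ```

    As a sanity check, the n-point rule integrates polynomials up to degree 2n−1 exactly, so for n = 12 the moments of x² and x¹⁰ over [−1, 1] should be reproduced to machine precision.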

  17. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  18. USI: a fast and accurate approach for conceptual document annotation.

    Science.gov (United States)

    Fiorini, Nicolas; Ranwez, Sylvie; Montmain, Jacky; Ranwez, Vincent

    2015-03-14

    Semantic approaches such as concept-based information retrieval rely on a corpus in which resources are indexed by concepts belonging to a domain ontology. In order to keep such applications up-to-date, new entities need to be frequently annotated to enrich the corpus. However, this task is time-consuming and requires a high level of expertise in both the domain and the related ontology. Different strategies have thus been proposed to ease this indexing process, each one taking advantage of the features of the document. In this paper we present USI (User-oriented Semantic Indexer), a fast and intuitive method for indexing tasks. We introduce a solution to suggest a conceptual annotation for new entities based on related already indexed documents. Our results, compared to those obtained by previous authors using the MeSH thesaurus and a dataset of biomedical papers, show that the method surpasses text-specific methods in terms of both quality and speed. Evaluations are done via usual metrics and semantic similarity. By only relying on neighbor documents, the User-oriented Semantic Indexer does not need a representative learning set. Yet, it provides better results than the other approaches by giving a consistent annotation scored with a global criterion, instead of one score per concept.

  19. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt

  20. Defect correction and multigrid for an efficient and accurate computation of airfoil flows

    NARCIS (Netherlands)

    Koren, B.

    1988-01-01

    Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction

  1. MOBILE CLOUD COMPUTING APPLIED TO HEALTHCARE APPROACH

    OpenAIRE

    Omar AlSheikSalem

    2016-01-01

    In the past few years it has become clear that mobile cloud computing, established by integrating mobile computing with cloud computing, adds both storage space and processing speed. Integrating healthcare applications and services is one of the vast data approaches that can be adapted to mobile cloud computing. This work proposes a framework for global healthcare computing that combines mobile computing and cloud computing. This approach leads to integrating all of ...

  2. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  3. Computer-based personality judgments are more accurate than those made by humans

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  5. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm² to 392 cm². (Author).
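    The core of any triangulation-based surface measurement is summing per-triangle areas via the cross product. A generic sketch of that step (not the authors' software) is:

    ```python
    import numpy as np

    def mesh_surface_area(vertices, triangles):
        """Total area of a triangle mesh: 0.5 * |(B - A) x (C - A)| per triangle."""
        v = np.asarray(vertices, dtype=float)
        t = np.asarray(triangles, dtype=int)
        a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
        cross = np.cross(b - a, c - a)
        return 0.5 * np.linalg.norm(cross, axis=1).sum()

    # Sanity check: a unit square in the z = 0 plane, split into two
    # triangles, has surface area exactly 1.
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
    tris = [(0, 1, 2), (0, 2, 3)]
    area = mesh_surface_area(verts, tris)
    ```

    For serial CT sections, the triangles connecting contours on adjacent slices would be fed to the same routine.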

  6. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed and often by using the approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all the latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables would avoid the need for repetitive and approximate calculations and serve as a useful ready reference for providing accurate values to solar energy scientists and engineers.
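    The tabulated quantities can be reproduced from Spencer's Fourier series for the declination and the eccentricity correction factor. The sketch below follows the standard textbook formulas (H0 in Wh/m², day length in hours); the solar constant value is an assumption:

    ```python
    import math

    def spencer_declination(day_of_year):
        """Solar declination (radians) from Spencer's Fourier series."""
        g = 2 * math.pi * (day_of_year - 1) / 365
        return (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
                - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
                - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))

    def spencer_eccentricity(day_of_year):
        """Eccentricity correction factor E0 from Spencer's series."""
        g = 2 * math.pi * (day_of_year - 1) / 365
        return (1.000110 + 0.034221 * math.cos(g) + 0.001280 * math.sin(g)
                + 0.000719 * math.cos(2 * g) + 0.000077 * math.sin(2 * g))

    def daily_extraterrestrial(lat_deg, day_of_year, solar_constant=1367.0):
        """Daily extraterrestrial irradiation H0 (Wh/m^2) and day length N (h)."""
        phi = math.radians(lat_deg)
        delta = spencer_declination(day_of_year)
        # Sunset hour angle, clamped for polar day/night.
        ws = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta))))
        h0 = (24 / math.pi) * solar_constant * spencer_eccentricity(day_of_year) * (
            math.cos(phi) * math.cos(delta) * math.sin(ws)
            + ws * math.sin(phi) * math.sin(delta))
        return h0, 24 * ws / math.pi
    ```

    At the equator the day length is 12 h year-round (tan φ = 0 makes the sunset hour angle exactly π/2), which provides an easy check of the implementation.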

  7. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, but the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
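    Attenuation-based size estimates of this kind are commonly formulated as a water-equivalent diameter, in the spirit of AAPM Report 220. The sketch below is a generic version of that idea, not the paper's exact procedure:

    ```python
    import math
    import numpy as np

    def water_equivalent_diameter(hu_image, pixel_area_mm2):
        """AAPM-220-style size estimate from CT numbers:
        A_w = sum(HU/1000 + 1) * pixel area, D_w = 2 * sqrt(A_w / pi).
        Air (HU = -1000) contributes nothing; water (HU = 0) contributes
        its full pixel area."""
        a_w = float(np.sum(hu_image / 1000.0 + 1.0)) * pixel_area_mm2
        return 2.0 * math.sqrt(max(a_w, 0.0) / math.pi)

    # Synthetic check: a 200 mm water cylinder (HU = 0) in air on a 1 mm
    # grid should come back with D_w close to 200 mm.
    yy, xx = np.mgrid[-256:256, -256:256]
    phantom = np.where(xx ** 2 + yy ** 2 <= 100 ** 2, 0.0, -1000.0)
    dw = water_equivalent_diameter(phantom, 1.0)
    ```

    Because the estimate weights each pixel by its attenuation relative to water, it naturally accounts for tissue composition rather than outline geometry alone.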

  8. Computational approach to Riemann surfaces

    CERN Document Server

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existing computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition, examples of solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  9. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  10. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...

  11. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change. Updated to cover the mobile computing revolution. Emphasizes the two most im

  12. Assessing smoking status in disadvantaged populations: is computer administered self report an accurate and acceptable measure?

    Directory of Open Access Journals (Sweden)

    Bryant Jamie

    2011-11-01

    Full Text Available Abstract Background Self report of smoking status is potentially unreliable in certain situations and in high-risk populations. This study aimed to determine the accuracy and acceptability of computer administered self-report of smoking status among a low socioeconomic (SES population. Methods Clients attending a community service organisation for welfare support were invited to complete a cross-sectional touch screen computer health survey. Following survey completion, participants were invited to provide a breath sample to measure exposure to tobacco smoke in expired air. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Results Three hundred and eighty three participants completed the health survey, and 330 (86% provided a breath sample. Of participants included in the validation analysis, 59% reported being a daily or occasional smoker. Sensitivity was 94.4% and specificity 92.8%. The positive and negative predictive values were 94.9% and 92.0% respectively. The majority of participants reported that the touch screen survey was both enjoyable (79% and easy (88% to complete. Conclusions Computer administered self report is both acceptable and accurate as a method of assessing smoking status among low SES smokers in a community setting. Routine collection of health information using touch-screen computer has the potential to identify smokers and increase provision of support and referral in the community setting.
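    The reported diagnostic metrics follow directly from a 2x2 validation table. The counts below are hypothetical values chosen only to reproduce rates close to those in the abstract, not the study's raw data:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 table of
        true/false positives and negatives."""
        return {
            "sensitivity": tp / (tp + fn),  # self-reported smokers confirmed by breath test
            "specificity": tn / (tn + fp),  # self-reported non-smokers confirmed negative
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts for illustration:
    m = diagnostic_metrics(tp=185, fp=10, fn=11, tn=124)
    ```

    With these invented counts the sensitivity works out to about 94.4% and the specificity to about 92.5%, in the same range as the study's reported values.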

  13. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.

  14. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems based on this approach have been constructed using a clinical linear accelerator (Elekta Synergy RP) and an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates geometric parameters with a level of accuracy such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows uncertainty of 0.01 deg. (around the beam direction) to 0.3 deg. (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (along the beam direction) in position for the medical linear accelerator geometry.
Experimental measurements using a laboratory bench Cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in

  15. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    Science.gov (United States)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intensive than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
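
    As a point of reference for the trade-off described above, a rejection-free (Gillespie-type) lattice KMC loop is only a few lines. The toy model below is an illustration of the method, not the paper's NO oxidation model: it uses non-interacting adsorption/desorption, deliberately chosen so that mean-field is exact and the KMC average must reproduce it:

```python
import numpy as np

def kmc_coverage(ka=1.0, kd=2.0, nsites=200, nsteps=100_000, seed=0):
    """Minimal rejection-free lattice KMC for non-interacting
    adsorption/desorption. Without lateral interactions, mean-field
    is exact and the steady coverage is theta* = ka / (ka + kd)."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(nsites, dtype=bool)
    acc_theta = acc_t = 0.0
    for _ in range(nsteps):
        n_occ = int(occ.sum())
        r_ads = ka * (nsites - n_occ)       # total adsorption propensity
        r_des = kd * n_occ                  # total desorption propensity
        rtot = r_ads + r_des
        dt = rng.exponential(1.0 / rtot)    # stochastic time advance
        acc_theta += (n_occ / nsites) * dt  # time-weighted coverage
        acc_t += dt
        if rng.random() < r_ads / rtot:     # pick event class, then a site
            occ[rng.choice(np.flatnonzero(~occ))] = True
        else:
            occ[rng.choice(np.flatnonzero(occ))] = False
    return acc_theta / acc_t
```

    Adding lateral interactions makes the propensities configuration-dependent, which is exactly where mean-field starts to fail and cluster-level corrections become valuable.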

  16. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  17. What is computation : An epistemic approach

    NARCIS (Netherlands)

    Wiedermann, Jiří; van Leeuwen, Jan

    2015-01-01

    Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too

  18. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    Science.gov (United States)

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9°C) that it turns out to be straightforward to quantify with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to greenhouse gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate-skeptic argument - that the models are wrong and the warming is natural - and finally allows for a closure of the debate.
In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
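
    The mechanics of the surrogate argument fit in a few lines. The sketch below uses synthetic, illustrative numbers (not the paper's actual CO2 and temperature series) purely to show the approach: regress temperature against a log2 CO2 forcing surrogate and read the residual as natural variability:

```python
import numpy as np

# Synthetic stand-ins for the 1880-present series (illustrative values only):
rng = np.random.default_rng(1)
co2 = np.linspace(290.0, 400.0, 135)    # hypothetical annual CO2 (ppm)
forcing = np.log2(co2 / 277.0)          # doublings relative to a pre-industrial level
temp = 2.3 * forcing + 0.1 * rng.standard_normal(co2.size)  # anomaly + "natural" noise

# Least-squares slope plays the role of an effective sensitivity per CO2 doubling;
# the residual is interpreted as the natural variability.
slope, intercept = np.polyfit(forcing, temp, 1)
residual = temp - (slope * forcing + intercept)
```

    The point of the exercise is that the fit and the residual decomposition require nothing beyond elementary regression, in contrast to a GCM.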

  19. Integrative approaches to computational biomedicine

    Science.gov (United States)

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  20. Infinitesimal symmetries: a computational approach

    International Nuclear Information System (INIS)

    Kersten, P.H.M.

    1985-01-01

    This thesis is concerned with computational aspects of the determination of infinitesimal symmetries and Lie-Baecklund transformations of differential equations. Moreover, some problems are calculated explicitly. A brief introduction to some concepts in the theory of symmetries and Lie-Baecklund transformations relevant to this thesis is given. The mathematical formalism is briefly reviewed. The jet bundle formulation is chosen because, by its algebraic nature, objects can be described very precisely in it; consequently it is appropriate for implementation. A number of procedures are discussed which enable these computations, which are very extensive in practice, to be carried out with the help of a computer. The Lie algebras of infinitesimal symmetries of a number of differential equations in mathematical physics are established and some of their applications are discussed, i.e., the Maxwell equations, the nonlinear diffusion equation, the nonlinear Schroedinger equation, the nonlinear Dirac equations and the self-dual SU(2) Yang-Mills equations. Lie-Baecklund transformations of Burgers' equation, the classical Boussinesq equation and the massive Thirring model are determined. Furthermore, nonlocal Lie-Baecklund transformations of the last equation are derived. (orig.)

  1. Computational approach in zeolite science

    NARCIS (Netherlands)

    Pidko, E.A.; Santen, van R.A.; Chester, A.W.; Derouane, E.G.

    2009-01-01

    This chapter presents an overview of different computational methods and their application to various fields of zeolite chemistry. We will discuss static lattice methods based on interatomic potentials to predict zeolite structures and topologies, Monte Carlo simulations for the investigation of

  2. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    Science.gov (United States)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

    Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized accurate measurement method. The purpose of this study was to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can
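
    The fitted regression equations lend themselves to direct implementation. A small sketch (units are assumed, presumably cm for diameters and ml for volume; the abstract does not state them):

```python
def effusion_volume_from_depth(d):
    """Estimated pleural effusion volume from the greatest depth d, using the
    study's fitted regression V = 158.16*d - 116.01 (r = 0.91)."""
    return 158.16 * d - 116.01

def effusion_volume_from_diameters(l, h, d):
    """Estimate from the product of the three effusion diameters, using the
    study's fitted regression V = 0.56*(l*h*d) + 39.44 (r = 0.92)."""
    return 0.56 * (l * h * d) + 39.44
```

    Note that the depth-only formula goes negative for very small d, so both fits are only meaningful within the range of effusion sizes the study sampled.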

  3. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    International Nuclear Information System (INIS)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao

    2013-01-01

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 X d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l X h X d): V = 0.56 X (l X h X d) + 39.44 (r = 0.92, P = 0

  4. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Zhi-Jun [Dept. of Radiology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)], e-mail: Gzj3@163.com; Lin, Qiang [Dept. of Oncology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China); Liu, Hai-Tao [Dept. of General Surgery, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)] [and others])

    2013-09-15

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and there is currently no recognized accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 X d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l X h X d): V = 0.56 X (l X h X d) + 39.44 (r = 0.92, P = 0

  5. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Directory of Open Access Journals (Sweden)

    Wang Xingyuan

    2010-01-01

    Full Text Available This paper presents two methods for accurately computing the centers of periodic regions. One method applies to the general M-sets with integer index number, the other to the general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations which determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k=3,4,5,6) for the index numbers α=−25,−24,…,−1, all of which have high numerical accuracy.
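
    The underlying idea, Newton's method applied to the polynomial whose roots are the periodic regions' centers, can be illustrated for the classic M-set (index 2), where the period-k centers are roots of the iterated polynomial P_k(c). The sketch below uses ordinary double precision; the paper targets general index numbers and at least 48-digit accuracy, which requires arbitrary-precision arithmetic:

```python
def mandelbrot_center(c0, k, tol=1e-14, maxit=100):
    """Newton's method for a center of a period-k region of the classic M-set
    (z -> z^2 + c): solve P_k(c) = 0, where P_1 = c and P_{j+1} = P_j^2 + c.
    The c-derivative is accumulated alongside: P'_1 = 1, P'_{j+1} = 2*P_j*P'_j + 1."""
    c = complex(c0)
    for _ in range(maxit):
        p, dp = c, 1.0
        for _ in range(k - 1):
            p, dp = p * p + c, 2.0 * p * dp + 1.0
        step = p / dp
        c -= step
        if abs(step) < tol:
            break
    return c
```

    For example, a start near -1 converges to the period-2 center at c = -1, and a start near -1.7 converges to the real period-3 center at c ≈ -1.754878.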

  6. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (DeltaDeltaG) and denaturant (DeltaDeltaG(H2O)) denaturations, as well as mutant thermal stability (DeltaT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy achieved by applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six nearest neighbors in the 3-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.

  7. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2007-01-01

    The era of seemingly unlimited growth in processor performance is over: single chip architectures can no longer overcome the performance limitations imposed by the power they consume and the heat they generate. Today, Intel and other semiconductor firms are abandoning the single fast processor model in favor of multi-core microprocessors--chips that combine two or more processors in a single package. In the fourth edition of Computer Architecture, the authors focus on this historic shift, increasing their coverage of multiprocessors and exploring the most effective ways of achieving parallelism.

  8. Accurate Simulation of Parametrically Excited Micromirrors via Direct Computation of the Electrostatic Stiffness.

    Science.gov (United States)

    Frangi, Attilio; Guerrieri, Andrea; Boni, Nicoló

    2017-04-06

    Electrostatically actuated torsional micromirrors are key elements in Micro-Opto-Electro-Mechanical Systems. When forced by means of in-plane comb-fingers, the dynamics of the main torsional response is known to be strongly non-linear and governed by parametric resonance. Here, in order to also trace unstable branches of the mirror response, we implement a simplified continuation method with arc-length control and propose an innovative technique based on Finite Elements and the concept of the material derivative in order to compute the electrostatic stiffness, i.e., the derivative of the torque with respect to the torsional angle, as required by the continuation approach.
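
    Arc-length (pseudo-arclength) continuation is what allows unstable branches to be traced past fold points, where stepping in the parameter alone fails. A minimal sketch on a toy problem (a circle of solutions, not the micromirror model):

```python
import numpy as np

def continue_circle(steps=200, ds=0.05):
    """Pseudo-arclength continuation on the toy curve g(x, lam) = x^2 + lam^2 - 1 = 0.
    Naive stepping in lam alone fails at the folds (lam = +/-1); the arc-length
    constraint lets the predictor-corrector walk straight through them."""
    u = np.array([0.0, 1.0])        # (x, lam), a point on the solution branch
    tangent = np.array([1.0, 0.0])  # initial direction along the branch
    path = [u.copy()]
    for _ in range(steps):
        v = u + ds * tangent                      # predictor step
        for _ in range(50):                       # Newton corrector on bordered system
            g = v[0] ** 2 + v[1] ** 2 - 1.0
            arc = tangent @ (v - u) - ds          # arc-length constraint
            J = np.array([[2 * v[0], 2 * v[1]],
                          [tangent[0], tangent[1]]])
            dv = np.linalg.solve(J, -np.array([g, arc]))
            v += dv
            if np.linalg.norm(dv) < 1e-12:
                break
        tangent = (v - u) / np.linalg.norm(v - u)  # secant update of the tangent
        u = v
        path.append(u.copy())
    return np.array(path)
```

    In the mirror problem the role of g is played by the residual of the periodic-response equations, and the Jacobian requires the electrostatic stiffness, which is exactly the derivative the paper computes by the material-derivative technique.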

  9. Accurate Computation of Reduction Potentials of 4Fe−4S Clusters Indicates a Carboxylate Shift in Pyrococcus furiosus Ferredoxin

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager

    2007-01-01

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...

  10. Learning and geometry computational approaches

    CERN Document Server

    Smith, Carl

    1996-01-01

    The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C...

  11. Quantum Computing: a Quantum Group Approach

    OpenAIRE

    Wang, Zhenghan

    2013-01-01

    There is compelling theoretical evidence that quantum physics will change the face of information science. Exciting progress has been made during the last two decades towards the building of a large scale quantum computer. A quantum group approach stands out as a promising route to this holy grail, and provides hope that we may have quantum computers in our future.

  12. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  13. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
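
    The volume overlap metrics named above are straightforward to compute from binary masks. A minimal sketch (VD is taken here as the absolute volume difference; the paper's exact sign convention is not stated in the abstract):

```python
import numpy as np

def overlap_metrics(seg, ref, voxel_volume_mm3=1.0):
    """Volume difference (mm^3) and Dice similarity coefficient (%) between
    two binary segmentation masks of the same shape."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume_mm3
    dsc = 100.0 * 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    return vd, dsc
```

    The surface distance metrics (ASSD, RMSSSD, MSSD) additionally require extracting boundary voxels and computing nearest-neighbor distances between the two surfaces, which is omitted here.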

  14. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm 3 ) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm 3 , 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm 3 , 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  15. Cognitive Approaches for Medicine in Cloud Computing.

    Science.gov (United States)

    Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia

    2018-03-03

    This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning-based approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing are aimed at supporting the processes of protecting data against unauthorised takeover, and they serve to enhance the data management processes. The outcome of the proposed tasks will be the definition of algorithms for the execution of meaning-based data interpretation processes in secure Cloud Computing. • We proposed cognitive methods for data description. • We proposed techniques for securing data in Cloud Computing. • The application of cognitive approaches to medicine was described.

  16. RIO: a new computational framework for accurate initial data of binary black holes

    Science.gov (United States)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-06-01

    We present a computational framework (Rio) in the ADM 3+1 approach to numerical relativity. This work enables us to carry out high-resolution calculations of initial data for two arbitrary black holes. We use the conformal transverse treatment together with the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.

  17. Accurate Simulation of Parametrically Excited Micromirrors via Direct Computation of the Electrostatic Stiffness

    Directory of Open Access Journals (Sweden)

    Attilio Frangi

    2017-04-01

Full Text Available Electrostatically actuated torsional micromirrors are key elements in Micro-Opto-Electro-Mechanical Systems. When forced by means of in-plane comb-fingers, the dynamics of the main torsional response is known to be strongly non-linear and governed by parametric resonance. Here, in order to also trace unstable branches of the mirror response, we implement a simplified continuation method with arc-length control and propose an innovative technique based on Finite Elements and the concept of the material derivative to compute the electrostatic stiffness, i.e., the derivative of the torque with respect to the torsional angle, as required by the continuation approach.
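Arc-length (pseudo-arclength) continuation, which is what lets the method above follow unstable branches, can be sketched as a predictor-corrector loop. The scalar test problem below, with a fold at λ = 1, is an illustrative stand-in and not the micromirror model:

```python
import numpy as np

def continue_branch(f, fx, fl, x0, l0, ds=0.05, steps=120):
    """Minimal pseudo-arclength continuation for f(x, lam) = 0.
    The arc-length constraint lets the path round fold points, which is
    what allows unstable branches to be traced; fx, fl are the partial
    derivatives of f with respect to x and lam."""
    tx, tl = -fl(x0, l0), fx(x0, l0)              # tangent: fx*tx + fl*tl = 0
    n = np.hypot(tx, tl); tx, tl = tx / n, tl / n
    x, l = x0, l0
    pts = [(x, l)]
    for _ in range(steps):
        xp, lp = x + ds * tx, l + ds * tl          # predictor along tangent
        for _ in range(25):                        # Newton corrector
            r1 = f(xp, lp)
            r2 = tx * (xp - x) + tl * (lp - l) - ds  # arc-length constraint
            if abs(r1) < 1e-12 and abs(r2) < 1e-12:
                break
            J = np.array([[fx(xp, lp), fl(xp, lp)], [tx, tl]])
            dx, dl = np.linalg.solve(J, [-r1, -r2])
            xp, lp = xp + dx, lp + dl
        nx, nl = -fl(xp, lp), fx(xp, lp)           # new tangent, keep orientation
        if nx * tx + nl * tl < 0:
            nx, nl = -nx, -nl
        n = np.hypot(nx, nl); tx, tl = nx / n, nl / n
        x, l = xp, lp
        pts.append((x, l))
    return np.array(pts)

# Fold example: x^2 + lam - 1 = 0 has a turning point at (x, lam) = (0, 1)
f  = lambda x, l: x * x + l - 1.0
fx = lambda x, l: 2.0 * x
fl = lambda x, l: 1.0
branch = continue_branch(f, fx, fl, x0=1.0, l0=0.0)
```

The traced points pass smoothly through the fold into the second (lower) half of the parabola, which a naive parameter sweep in λ could not do.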

  18. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows us to extrapolate to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
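The Green-Kubo relation referenced above obtains κ from the time integral of the heat-flux autocorrelation function, κ = V/(k_B T²) ∫₀^∞ ⟨J(0)J(t)⟩ dt. A minimal sketch for one Cartesian component of a sampled flux series (units and prefactor conventions simplified; this is the plain GK estimator, not the authors' accelerated scheme):

```python
import numpy as np

def green_kubo_kappa(J_t, dt, volume, temperature, kB=1.0):
    """kappa = V / (kB T^2) * integral of <J(0) J(t)> dt for one heat-flux
    component, with the autocorrelation computed via FFT (Wiener-Khinchin)."""
    J_t = np.asarray(J_t, float)
    n = len(J_t)
    f = np.fft.rfft(J_t, 2 * n)                 # zero-padded transform
    acf = np.fft.irfft(f * np.conj(f))[:n]      # raw lag sums
    acf /= (n - np.arange(n))                   # unbiased normalization
    integral = dt * (acf[0] / 2 + acf[1:-1].sum() + acf[-1] / 2)  # trapezoid
    return volume / (kB * temperature**2) * integral
```

In practice the running integral is truncated once the autocorrelation has decayed; the convergence difficulty of exactly this integral is what the paper's effective-harmonic extrapolation addresses.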

  19. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    Science.gov (United States)

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and also HU/HUD to % of each component within the stone. Significance was considered if p<0.05. Correlation with stone size was weakly positive for HU (R=0.017; p=0.912) and negative for HUD (R=-0.20; p=0.898). Comparing pure struvite stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09) but HUD was significantly lower for pure stones (27.9±23.6 v 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.
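The HU density used throughout this study is a simple derived quantity; as a sketch of the definition given in the abstract (parameter names are illustrative):

```python
def hu_density(mean_hu, largest_transverse_diameter_mm):
    """HU density (HUD): mean Hounsfield units divided by the stone's
    largest transverse diameter, as defined in the abstract."""
    return mean_hu / largest_transverse_diameter_mm

hud = hu_density(500.0, 10.0)  # 50.0 HU/mm
```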

  20. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Energy Technology Data Exchange (ETDEWEB)

Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O.; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT-derived LV muscle mass (LV-MM), which may be related to LE, resulting in an LE proportion for potential risk stratification in HCM. N=26 HCM patients underwent leMDCT (64-slice CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast, assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 g in CMR (p > 0.05). In the group with sufficient contrast, LV-MM appeared as 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT measures correlated well with CMR (r > 0.9). LeMDCT, primarily performed for LE visualization in HCM, allows for accurate LV volumetry including LV-MM in > 90 % of cases. (orig.)

  1. Toward exascale computing through neuromorphic approaches.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  2. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    KAUST Repository

    Gendron, É .; Charara, Ali; Abdelfattah, Ahmad; Gratadour, D.; Keyes, David E.; Ltaief, Hatem; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.

    2014-01-01

Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL.
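The MMSE tomographic reconstructor described above has the generic form R = C_tm C_mm⁻¹ (cross-covariance of truth and measurements times the inverse measurement covariance). A toy numpy sketch, with illustrative matrix names and sizes; the real MORSE/GPU implementation factorizes ~40 000 × 40 000 systems:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_tru = 8, 4   # toy sizes; the real problem is ~40 000 x 40 000

# Illustrative covariances: C_mm (measurements, incl. noise) must be SPD
A = rng.standard_normal((n_meas, n_meas))
C_mm = A @ A.T + n_meas * np.eye(n_meas)
C_tm = rng.standard_normal((n_tru, n_meas))   # truth/measurement cross-cov

# R = C_tm @ inv(C_mm), computed with a Cholesky solve instead of inv()
L = np.linalg.cholesky(C_mm)
R = np.linalg.solve(L.T, np.linalg.solve(L, C_tm.T)).T
```

Solving against the Cholesky factor rather than forming the explicit inverse is both cheaper and numerically safer, and maps directly onto the batched GPU factorization routines mentioned in the abstract.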

  4. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

Molecular acidity is one of the important physicochemical properties of a molecular system, yet its accurate calculation and prediction remain an unresolved problem in the literature. In this work, we propose to make use of quantities from the information-theoretic (IT) approach in density functional reactivity theory to provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and applicable to other systems as well. © 2017 Wiley Periodicals, Inc.
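Among the IT quantities listed, Shannon entropy has the simplest form, S = -∫ ρ(r) ln ρ(r) dr. A 1-D numerical sketch on a normalized Gaussian standing in for the electron density (for a unit-variance Gaussian the analytic value is ½ ln(2πe)):

```python
import numpy as np

def shannon_entropy(rho, dx):
    """S = -integral of rho * ln(rho) for a normalized density sampled on a
    uniform grid (trapezoidal quadrature); 1-D stand-in for the 3-D
    electron density used in the IT approach."""
    rho = np.asarray(rho, float)
    safe = np.where(rho > 0.0, rho, 1.0)          # avoid log(0)
    g = np.where(rho > 0.0, rho * np.log(safe), 0.0)
    return -dx * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)

x = np.linspace(-10.0, 10.0, 4001)
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # unit-variance Gaussian
S = shannon_entropy(rho, x[1] - x[0])             # ≈ 0.5 * ln(2*pi*e) ≈ 1.4189
```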

  5. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
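Screening a few highly discriminative genes, as described above, can be illustrated with a simple two-class separation score (this scoring is a generic stand-in, not the authors' rough-set procedure):

```python
import numpy as np

def top_discriminative_genes(X, y, k=1):
    """Rank genes (columns of X) for a two-class problem by a simple
    separation score |mean difference| / pooled spread; return top-k
    column indices. A generic stand-in for rough-set based screening."""
    X, y = np.asarray(X, float), np.asarray(y)
    a, b = X[y == 0], X[y == 1]
    score = np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)
    return np.argsort(score)[::-1][:k]

# Toy expression matrix: gene 1 cleanly separates the two classes
X = np.array([[0.1, 5.0], [0.0, 5.1], [-0.1, -5.0], [0.0, -5.1]])
y = np.array([0, 0, 1, 1])
best = top_discriminative_genes(X, y)
```

A decision rule built on the single selected gene (e.g. a threshold on its expression) then yields the kind of interpretable single-gene classifier the abstract describes.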

  6. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Directory of Open Access Journals (Sweden)

    Saumya Tiwari

Full Text Available Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in the assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level, while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high-contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real-time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.

  7. A Highly Accurate and Efficient Analytical Approach to Bridge Deck Free Vibration Analysis

    Directory of Open Access Journals (Sweden)

    D.J. Gorman

    2000-01-01

    Full Text Available The superposition method is employed to obtain an accurate analytical type solution for the free vibration frequencies and mode shapes of multi-span bridge decks. Free edge conditions are imposed on the long edges running in the direction of the deck. Inter-span support is of the simple (knife-edge type. The analysis is valid regardless of the number of spans or their individual lengths. Exact agreement is found when computed results are compared with known eigenvalues for bridge decks with all spans of equal length. Mode shapes and eigenvalues are presented for typical bridge decks of three and four span lengths. In each case torsional and non-torsional modes are studied.

  8. Computational fluid dynamics a practical approach

    CERN Document Server

    Tu, Jiyuan; Liu, Chaoqun

    2018-01-01

    Computational Fluid Dynamics: A Practical Approach, Third Edition, is an introduction to CFD fundamentals and commercial CFD software to solve engineering problems. The book is designed for a wide variety of engineering students new to CFD, and for practicing engineers learning CFD for the first time. Combining an appropriate level of mathematical background, worked examples, computer screen shots, and step-by-step processes, this book walks the reader through modeling and computing, as well as interpreting CFD results. This new edition has been updated throughout, with new content and improved figures, examples and problems.

  9. Computational neuropharmacology: dynamical approaches in drug discovery.

    Science.gov (United States)

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  10. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz; Goteng, Gokop; Shankar, Vijai; Al-Qurashi, Khalid; Roberts, William L.; Sarathy, Mani

    2015-01-01

    simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating

  11. Efficient and Accurate Computational Framework for Injector Design and Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  12. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    Science.gov (United States)

    Ustinov, E A

    2014-10-07

Commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalization of the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  14. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  15. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  16. How accurate are adolescents in portion-size estimation using the computer tool young adolescents' nutrition assessment on computer (YANA-C)?

    OpenAIRE

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-01-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...

  17. Accurate calculation of conformational free energy differences in explicit water: the confinement-solvation free energy approach.

    Science.gov (United States)

    Esque, Jeremy; Cecchini, Marco

    2015-04-23

    The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.
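The multistage free energy perturbation step relies on the Zwanzig estimator, ΔF = -kT ln⟨exp(-ΔU/kT)⟩₀. A numerically stable sketch (the log-sum-exp shift and the sample array are illustrative; in the CSF method the ΔU samples come from the restrained-conformer simulations):

```python
import numpy as np

def fep_delta_F(dU, kT=1.0):
    """Zwanzig free energy perturbation estimator
    dF = -kT * ln< exp(-dU/kT) >, stabilized by shifting the exponent
    (log-sum-exp trick) so that large dU values cannot overflow."""
    u = np.asarray(dU, float) / kT
    m = u.min()
    return kT * (m - np.log(np.mean(np.exp(-(u - m)))))

# For identical end states (constant dU) the estimator returns dU exactly
dF = fep_delta_F(np.full(1000, 2.5))   # → 2.5
```

Summing such per-stage estimates over the multistage ladder gives the solvation free energy of each restrained conformer.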

  18. A Trace Data-Based Approach for an Accurate Estimation of Precise Utilization Maps in LTE

    Directory of Open Access Journals (Sweden)

    Almudena Sánchez

    2017-01-01

Full Text Available For network planning and optimization purposes, mobile operators make use of Key Performance Indicators (KPIs), computed from Performance Measurements (PMs), to determine whether network performance needs to be improved. In current networks, PMs, and therefore KPIs, suffer from a lack of precision due to insufficient temporal and/or spatial granularity. In this work, an automatic method, based on data traces, is proposed to improve the accuracy of radio network utilization measurements collected in a Long-Term Evolution (LTE) network. The method’s output is an accurate estimate of the spatial and temporal distribution of the cell utilization ratio that can be extended to other indicators. The method can be used to improve automatic network planning and optimization algorithms in a centralized Self-Organizing Network (SON) entity, since potential issues can be more precisely detected and located inside a cell thanks to temporal and spatial precision. The proposed method is tested with real connection traces gathered in a large geographical area of a live LTE network and considers overload problems due to trace file size limitations, which is a key consideration when analysing a large network. Results show how these distributions provide very detailed information on network utilization, compared to cell-based statistics.

  19. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  20. Integrated Translatome and Proteome: Approach for Accurate Portraying of Widespread Multifunctional Aspects of Trichoderma

    Science.gov (United States)

    Sharma, Vivek; Salwan, Richa; Sharma, P. N.; Gulati, Arvind

    2017-01-01

Genome-wide studies of transcript expression help in systematic monitoring of genes and allow targeting of candidate genes for future research. In contrast to relatively stable genomic data, the expression of genes is dynamic and regulated at different levels in both time and space. The variation in the rate of translation is specific for each protein. Both the inherent nature of an mRNA molecule to be translated and external environmental stimuli can affect the efficiency of the translation process. In biocontrol agents (BCAs), the molecular response at the translational level may represent both a noise-like response of the absolute transcript level and an adaptive response to physiological and pathological situations, reflecting the subset of the mRNA population actively translated in a cell. The molecular responses of biocontrol are complex and involve multistage regulation of a number of genes. The use of high-throughput techniques has led to a rapid increase in the volume of transcriptomics data of Trichoderma. In general, almost half of the variation between transcriptome and protein levels is due to translational control. Thus, studies are required that integrate raw information from different “omics” approaches for accurate depiction of the translational response of BCAs in interaction with plants and plant pathogens. Studies of the translational status of actively translated mRNAs, bridged with proteome data, will help in accurate characterization of the subset of mRNAs actively engaged in translation. This review highlights the associated bottlenecks and the use of state-of-the-art procedures in addressing the gap to accelerate future accomplishment of biocontrol mechanisms. PMID:28900417

  1. Sonographically guided posteromedial approach for intra-articular knee injections: a safe, accurate, and efficient method.

    Science.gov (United States)

    Tresley, Jonathan; Jose, Jean

    2015-04-01

    Osteoarthritis of the knee can be a debilitating and extremely painful condition. In patients who desire to postpone knee arthroplasty or in those who are not surgical candidates, percutaneous knee injection therapies have the potential to reduce pain and swelling, maintain joint mobility, and minimize disability. Published studies cite poor accuracy of intra-articular knee joint injections without imaging guidance. We present a sonographically guided posteromedial approach to intra-articular knee joint injections with 100% accuracy and no complications in a consecutive series of 67 patients undergoing subsequent computed tomographic or magnetic resonance arthrography. Although many other standard approaches are available, a posteromedial intra-articular technique is particularly useful in patients with a large body habitus and theoretically allows for simultaneous aspiration of Baker cysts with a single sterile preparation and without changing the patient's position. The posteromedial technique described in this paper is not compared or deemed superior to other standard approaches but, rather, is presented as a potentially safe and efficient alternative. © 2015 by the American Institute of Ultrasound in Medicine.

  2. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  3. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  4. Computational Approaches to Nucleic Acid Origami.

    Science.gov (United States)

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  5. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
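The sample-based form of the channelized Hotelling observer that the paper's theoretical approximation replaces can be sketched in a few lines. The sketch below is illustrative only: it uses toy Gaussian channel outputs rather than MAP-reconstructed PET images, and the channel count, mean shift, and covariance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 500

# toy channel outputs for signal-absent (x0) and signal-present (x1) images
mu1 = np.full(n_channels, 0.3)
x0 = rng.multivariate_normal(np.zeros(n_channels), np.eye(n_channels), n_samples)
x1 = rng.multivariate_normal(mu1, np.eye(n_channels), n_samples)

# Hotelling template: pooled channel covariance inverse times mean difference
s_pooled = 0.5 * (np.cov(x0.T) + np.cov(x1.T))
w = np.linalg.solve(s_pooled, x1.mean(axis=0) - x0.mean(axis=0))

# observer test statistics and detectability index d'
t0, t1 = x0 @ w, x1 @ w
d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
```

The appeal of the closed-form approach described in the record is precisely that it avoids generating the many sample reconstructions this Monte Carlo estimate of `d_prime` requires.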

  6. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Cirera, J

    2009-01-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron...

  7. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  8. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Science.gov (United States)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.

  9. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    Science.gov (United States)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
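As a hedged numerical illustration of the partitioning idea (not Gary's optical implementation), each matrix entry can be split into low-precision digit planes, each plane multiplied by the vector separately (the step an analog processor would perform), and the partial products recombined digitally. The base and digit count below are arbitrary choices:

```python
import numpy as np

def partition_digits(m, base, n_digits):
    # split nonnegative integer entries into base-`base` digit planes
    digits, rem = [], m.copy()
    for _ in range(n_digits):
        digits.append(rem % base)
        rem //= base
    return digits  # digits[k] is the k-th least-significant digit plane

def partitioned_matvec(m, v, base=16, n_digits=2):
    # each digit-plane product needs only low (analog-level) precision;
    # the digital recombination restores full precision
    partials = [d @ v for d in partition_digits(m, base, n_digits)]
    return sum((base ** k) * p for k, p in enumerate(partials))

rng = np.random.default_rng(0)
m = rng.integers(0, 256, size=(4, 4))   # 8-bit matrix entries
v = rng.integers(0, 10, size=4)
result = partitioned_matvec(m, v)       # equals the exact product m @ v
```

Because each partial product only ever involves digits in `[0, base)`, the dynamic range demanded of the analog stage shrinks, which is the source of the accuracy gain the abstract describes.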

  10. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  11. Introducing Computational Approaches in Intermediate Mechanics

    Science.gov (United States)

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)

  12. Interacting electrons theory and computational approaches

    CERN Document Server

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  13. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  14. Accurate determination of high-risk coronary lesion type by multidetector cardiac computed tomography.

    Science.gov (United States)

    Alasnag, Mirvat; Umakanthan, Branavan; Foster, Gary P

    2008-07-01

    Coronary arteriography (CA) is the standard method to image coronary lesions. Multidetector cardiac computerized tomography (MDCT) provides high-resolution images of coronary arteries, allowing a noninvasive alternative to determine lesion type. To date, no studies have assessed the ability of MDCT to categorize coronary lesion types. The objective of this study was to determine the accuracy of lesion type categorization by MDCT using CA as a reference standard. Patients who underwent both MDCT and CA within 2 months of each other were enrolled. MDCT and CA images were reviewed in a blinded fashion. Lesions were categorized according to the SCAI classification system (Types I-IV). The origin, proximal and middle segments of the major arteries were analyzed. Each segment comprised a data point for comparison. Analysis was performed using the Spearman Correlation Test. Four hundred eleven segments were studied, of which 110 had lesions. The lesion distribution was as follows: 35 left anterior descending (LAD), 29 circumflex (Cx), 31 right coronary artery (RCA), 2 ramus intermedius, 8 diagonal, 4 obtuse marginal and 2 left internal mammary arteries. Correlations between MDCT and CA were significant in all major vessels (LAD, Cx, RCA) (p < 0.001). The overall correlation coefficient was 0.67. Concordance was strong for lesion Types II-IV (97%) and poor for Type I (30%). High-risk coronary lesion types can be accurately categorized by MDCT. This ability may allow MDCT to play an important noninvasive role in the planning of coronary interventions.

  15. Fast and accurate CMB computations in non-flat FLRW universes

    Science.gov (United States)

    Lesgourgues, Julien; Tram, Thomas

    2014-09-01

    We present a new method for calculating CMB anisotropies in a non-flat Friedmann universe, relying on a very stable algorithm for the calculation of hyperspherical Bessel functions, that can be pushed to arbitrary precision levels. We also introduce a new approximation scheme which gradually takes over in the flat space limit and leads to significant reductions of the computation time. Our method is implemented in the Boltzmann code class. It can be used to benchmark the accuracy of the camb code in curved space, which is found to match expectations. For default precision settings, corresponding to 0.1% for scalar temperature spectra and 0.2% for scalar polarisation spectra, our code is two to three times faster, depending on curvature. We also simplify the temperature and polarisation source terms significantly, so the different contributions to the Cℓ's are easy to identify inside the code.

  16. Fast and accurate CMB computations in non-flat FLRW universes

    International Nuclear Information System (INIS)

    Lesgourgues, Julien; Tram, Thomas

    2014-01-01

    We present a new method for calculating CMB anisotropies in a non-flat Friedmann universe, relying on a very stable algorithm for the calculation of hyperspherical Bessel functions, that can be pushed to arbitrary precision levels. We also introduce a new approximation scheme which gradually takes over in the flat space limit and leads to significant reductions of the computation time. Our method is implemented in the Boltzmann code class. It can be used to benchmark the accuracy of the camb code in curved space, which is found to match expectations. For default precision settings, corresponding to 0.1% for scalar temperature spectra and 0.2% for scalar polarisation spectra, our code is two to three times faster, depending on curvature. We also simplify the temperature and polarisation source terms significantly, so the different contributions to the Cℓ's are easy to identify inside the code

  17. A model for the accurate computation of the lateral scattering of protons in water

    CERN Document Server

    Bellinzona, EV; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-01-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as MC codes based on Molière theory, with a much shorter computing time.

  18. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
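The ranking of the interpolator schemes can be illustrated with a one-dimensional analogue. This sketch is not the study's micro-CT pipeline: it simply resamples a smooth synthetic signal half a sample off-grid with nearest-neighbour (order 0), tri-linear (order 1), and cubic B-spline (order 3) interpolation, assuming SciPy's `ndimage.shift` is available, and compares the errors against the analytic answer:

```python
import numpy as np
from scipy import ndimage

n = 256
t = np.linspace(0.0, 4.0 * np.pi, n)
dt = t[1] - t[0]
signal = np.sin(t)

def interp_error(order, shift=0.5):
    # resample the signal half a sample off-grid, as a registration would
    est = ndimage.shift(signal, shift, order=order, mode="nearest")
    exact = np.sin(t - shift * dt)           # analytically shifted signal
    # ignore a few boundary samples affected by edge handling
    return float(np.abs(est[8:-8] - exact[8:-8]).mean())

errors = {order: interp_error(order) for order in (0, 1, 3)}
# expected ordering: B-spline (3) < tri-linear (1) < nearest neighbour (0)
```

On smooth data the error gap between the schemes spans several orders of magnitude, consistent with the low 1.4% interpolation error the study reports for B-splines.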

  19. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  20. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  1. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  2. Trajectory Evaluation of Rotor-Flying Robots Using Accurate Inverse Computation Based on Algorithm Differentiation

    Directory of Open Access Journals (Sweden)

    Yuqing He

    2014-01-01

    Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of its model and significant uncertainties regarding many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible. It is necessary to conduct an analysis of the flight maneuvering ability of an RFR prior to test flight. Our aim in this paper is to use a numerical method called algorithm differentiation (AD) to solve this problem. The basic idea is to compute the internal state (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative-based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.
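The core of algorithm differentiation can be sketched with forward-mode dual numbers: a value is paired with its derivative and both are propagated exactly through the output trajectory, giving the derivatives that a flatness-based state/input inversion requires without finite-difference error. This is a minimal illustration, not the paper's implementation; the output function y(t) = t·sin(t) is an arbitrary example:

```python
import math

class Dual:
    # forward-mode AD number: value and first derivative, propagated together
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

# hypothetical flat output y(t) = t*sin(t); its exact derivative is the kind
# of quantity the flat-model inversion consumes
def y(t):
    return t * t.sin()

t0 = Dual(1.0, 1.0)   # seed dt/dt = 1
dy_dt = y(t0).der     # exact value of sin(1) + 1*cos(1) by the product rule
```

Higher-order derivatives (needed for angular rates and inputs) follow the same pattern by carrying additional derivative components alongside the value.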

  3. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    Science.gov (United States)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
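A minimal single-window MER picker (the baseline that the IMER method extends with a third window and moving-average smoothing) can be sketched as follows. The trace, window length, and noise model below are assumptions for illustration only:

```python
import numpy as np

def mer_pick(trace, window):
    # cumulative energy makes each window sum an O(1) lookup
    e = np.concatenate(([0.0], np.cumsum(np.asarray(trace, dtype=float) ** 2)))
    n = len(trace)
    mer = np.zeros(n)
    for i in range(window, n - window):
        pre = e[i] - e[i - window]        # energy just before sample i
        post = e[i + window] - e[i]       # energy just after sample i
        # modified energy ratio: energy ratio weighted by |amplitude|, cubed
        mer[i] = (post / (pre + 1e-12) * abs(trace[i])) ** 3
    return int(np.argmax(mer))

# synthetic trace: low-amplitude noise, stronger arrival starting at sample 300
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 0.1, 600)
trace[300:] += rng.normal(0.0, 1.0, 300)
pick = mer_pick(trace, window=50)  # lands near the true onset
```

The cubing sharpens the peak at the energy transition; the IMER refinements described above target exactly the failure mode where high-amplitude noise creates competing peaks in this curve.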

  4. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    Science.gov (United States)

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  5. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of grid for accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using Delayed detached-eddy simulation (DDES) with two eddy viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter Shear stress transport (SST) k-ω. A high order spatial discretization, WENO (Weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regards to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results employing the SST k-ω turbulence model, but the computational cost is more expensive as the grid contains a wake block that increases the number of cells.

  6. How accurate is image-free computer navigation for hip resurfacing arthroplasty? An anatomical investigation

    International Nuclear Information System (INIS)

    Schnurr, C.; Nessler, J.; Koenig, D.P.; Meyer, C.; Schild, H.H.; Koebke, J.

    2009-01-01

The existing studies concerning image-free navigated implantation of hip resurfacing arthroplasty are based on analysis of the accuracy of conventional biplane radiography. Studies have shown that these measurements in biplane radiography are imprecise and that precision is improved by use of three-dimensional (3D) computed tomography (CT) scans. To date, the accuracy of image-free navigation devices for hip resurfacing has not been investigated using CT scans, and anteversion accuracy has not been assessed at all. Furthermore, no study has tested the reliability of the navigation software concerning the automatically calculated implant position. The purpose of our study was to analyze the accuracy of varus-valgus and anteversion using an image-free hip resurfacing navigation device. The reliability of the software-calculated implant position was also determined. A total of 32 femoral hip resurfacing components were implanted on embalmed human femurs using an image-free navigation device. In all, 16 prostheses were implanted in the position proposed by the navigation software; the other 16 prostheses were inserted in an optimized valgus position. A 3D CT scan was undertaken before and after the operation. The difference between the measured and planned varus-valgus angle averaged 1 deg (mean±standard deviation (SD): group I, 1 deg±2 deg; group II, 1 deg±1 deg). The mean±SD difference between femoral neck anteversion and anteversion of the implant was 4 deg (group I, 4 deg±4 deg; group II, 4 deg±3 deg). The software-calculated implant position differed by 7 deg±8 deg from the measured neck-shaft angle. These measured accuracies did not differ significantly between the two groups. Our study proved the high accuracy of the navigation device concerning the most important biomechanical factor: the varus-valgus angle. The software calculation of the proposed implant position has been shown to be inaccurate and needs improvement. Hence, manual adjustment of the

  7. Methods for Computing Accurate Atomic Spin Moments for Collinear and Noncollinear Magnetism in Periodic and Nonperiodic Materials.

    Science.gov (United States)

    Manz, Thomas A; Sholl, David S

    2011-12-13

    The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.

  8. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of the system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
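The hybrid idea in this abstract can be illustrated with a small numeric sketch. This is a hypothetical 1-D analogue, not the authors' actual wafer model: a smooth global model (here a low-order polynomial) is fitted to sparse measurements, and the measured residuals are blended back in so that local errors survive in the dense output.

```python
import numpy as np

def hybrid_fingerprint(x_meas, y_meas, x_dense, poly_order=3):
    """Blend a smooth global model with local measured residuals (1-D sketch)."""
    # Global model: low-order polynomial fit to the sparse measurements.
    coeffs = np.polyfit(x_meas, y_meas, poly_order)
    global_dense = np.polyval(coeffs, x_dense)
    # Local residuals at the measured sites, spread onto the dense grid by
    # linear interpolation (zero outside the measured range).
    resid = y_meas - np.polyval(coeffs, x_meas)
    resid_dense = np.interp(x_dense, x_meas, resid, left=0.0, right=0.0)
    return global_dense + resid_dense
```

At the measured sites the hybrid output reproduces the measurements exactly, while between them it follows the smooth global model, which is the qualitative behaviour the paper attributes to its wafer-level system.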

  9. Novel computational approaches characterizing knee physiotherapy

    Directory of Open Access Journals (Sweden)

    Wangdo Kim

    2014-01-01

Full Text Available A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment, producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.

  10. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...
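As a toy illustration of the raw-audio pipeline such systems use (short-time features pooled over the clip, then a simple classifier), the following sketch separates two synthetic "genres". The feature set (frame energy and zero-crossing rate) and the nearest-centroid classifier are illustrative stand-ins, not the dissertation's actual design.

```python
import numpy as np

def features(signal, frame=512):
    """Pool frame-level energy and zero-crossing rate over a whole clip."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = np.mean(frames**2, axis=1)
    # fraction of sample-to-sample sign changes per frame
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

def nearest_centroid(train_X, train_y, x):
    """Classify x by distance to the per-class mean feature vector."""
    labels = sorted(set(train_y))
    cents = {l: np.mean([f for f, y in zip(train_X, train_y) if y == l], axis=0)
             for l in labels}
    return min(labels, key=lambda l: np.linalg.norm(x - cents[l]))
```

A pure tone has a low zero-crossing rate while broadband noise has a high one, so even this tiny feature vector separates the two synthetic classes cleanly; real genre classifiers replace it with richer spectral features.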

  11. A computational approach to animal breeding.

    Science.gov (United States)

    Berger-Wolf, Tanya Y; Moore, Cristopher; Saia, Jared

    2007-02-07

    We propose a computational model of mating strategies for controlled animal breeding programs. A mating strategy in a controlled breeding program is a heuristic with some optimization criteria as a goal. Thus, it is appropriate to use the computational tools available for analysis of optimization heuristics. In this paper, we propose the first discrete model of the controlled animal breeding problem and analyse heuristics for two possible objectives: (1) breeding for maximum diversity and (2) breeding a target individual. These two goals are representative of conservation biology and agricultural livestock management, respectively. We evaluate several mating strategies and provide upper and lower bounds for the expected number of matings. While the population parameters may vary and can change the actual number of matings for a particular strategy, the order of magnitude of the number of expected matings and the relative competitiveness of the mating heuristics remains the same. Thus, our simple discrete model of the animal breeding problem provides a novel viable and robust approach to designing and comparing breeding strategies in captive populations.

  12. Computation within the auxiliary field approach

    International Nuclear Information System (INIS)

    Baeurle, S.A.

    2003-01-01

Recently, the classical auxiliary field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing a fast statistical convergence and an efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms, the single-move auxiliary field Metropolis Monte Carlo algorithm and two new classes of force-based algorithms, which enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme, which permits each field degree of freedom to be treated individually with regard to its evolution through the auxiliary field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary field methodologies.
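A minimal version of the single-move Metropolis idea can be sketched as follows. The quadratic action in the usage example is a stand-in, not the paper's auxiliary-field functional, and the full recomputation of the action after each single-site move is for clarity rather than efficiency (a local action would allow an O(1) update).

```python
import numpy as np

def metropolis_sweep(field, action, step=0.5, rng=None):
    """One single-move Metropolis sweep over all field degrees of freedom."""
    rng = np.random.default_rng() if rng is None else rng
    accepted = 0
    current = action(field)
    for i in range(field.size):
        old = field[i]
        field[i] = old + rng.uniform(-step, step)
        proposed = action(field)
        # Metropolis rule: always accept downhill moves, accept uphill
        # moves with probability exp(-dS).
        if proposed <= current or rng.random() < np.exp(current - proposed):
            current = proposed
            accepted += 1
        else:
            field[i] = old  # reject: restore the old value
    return accepted / field.size

# Example: sample a unit Gaussian field with S[phi] = 0.5 * sum(phi_i^2).
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    phi = np.zeros(50)
    for _ in range(200):
        rate = metropolis_sweep(phi, lambda f: 0.5 * np.sum(f**2),
                                step=1.0, rng=rng)
    print(f"final acceptance rate per sweep: {rate:.2f}")
```

For the Gaussian action the equilibrium single-site variance is 1, which gives a quick sanity check on the sampler.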

  13. A computational approach to negative priming

    Science.gov (United States)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). The effect usually varies with subtle parameter changes (such as the response-stimulus interval). The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  14. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay dependent (EA) or asynchronous error and delay independent (ES) or synchronous error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wavenumber, λc. At smaller wavenumbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
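The distinction between synchronous and asynchronous (delay-dependent) error can be made concrete with a toy experiment. This is an illustrative 1-D heat equation, not the paper's convection-diffusion analysis: a standard FTCS update is compared against an "asynchronous" variant in which one interface point reads its neighbour from several time levels in the past, mimicking delayed boundary data in a parallel computation.

```python
import numpy as np

def run(nx=64, steps=200, nu=0.1, delay=4, site=16, asynchronous=False):
    """Max error of FTCS for u_t = nu*u_xx on a periodic domain; one point
    optionally reads a stale (delayed) left neighbour."""
    dx = 2 * np.pi / nx
    dt = 0.2 * dx**2 / nu          # stable explicit step (r = 0.2)
    x = np.arange(nx) * dx
    u = np.sin(x)
    history = [u.copy()] * delay    # past time levels for the stale read
    for _ in range(steps):
        left, right = np.roll(u, 1), np.roll(u, -1)
        if asynchronous:
            # this point sees its left neighbour `delay` steps late
            left[site] = history[0][site - 1]
        u_new = u + nu * dt / dx**2 * (left - 2 * u + right)
        history = history[1:] + [u.copy()]
        u = u_new
    exact = np.exp(-nu * steps * dt) * np.sin(x)
    return float(np.max(np.abs(u - exact)))
```

With these illustrative parameters the synchronous run recovers the decaying sine mode to high accuracy, while the delayed read injects an additional error at the interface point, which is the delay-dependent (EA) contribution the abstract describes.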

  15. Blueprinting Approach in Support of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2012-03-01

Full Text Available Current cloud service offerings, i.e., Software-as-a-service (SaaS), Platform-as-a-service (PaaS) and Infrastructure-as-a-service (IaaS) offerings, are often provided as monolithic, one-size-fits-all solutions and give little or no room for customization. This limits the ability of Service-based Application (SBA) developers to configure and syndicate offerings from multiple SaaS, PaaS, and IaaS providers to address their application requirements. Furthermore, combining different independent cloud services necessitates a uniform description format that facilitates the design, customization, and composition. Cloud Blueprinting is a novel approach that allows SBA developers to easily design, configure and deploy virtual SBA payloads on virtual machines and resource pools on the cloud. We propose the Blueprint concept as a uniform abstract description for cloud service offerings that may cross different cloud computing layers, i.e., SaaS, PaaS and IaaS. To support developers with the SBA design and development in the cloud, this paper introduces a formal Blueprint Template for unambiguously describing a blueprint, as well as a Blueprint Lifecycle that guides developers through the manipulation, composition and deployment of different blueprints for an SBA. Finally, the empirical evaluation of the blueprinting approach within an EC FP7 project is reported and an associated blueprint prototype implementation is presented.

  16. Serial fusion of Eulerian and Lagrangian approaches for accurate heart-rate estimation using face videos.

    Science.gov (United States)

    Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan

    2017-07-01

Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired by low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using these face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which is handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error of the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.
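The spectral step common to video-based HR methods can be sketched in a few lines: take a temporal signal extracted from the face region (here synthetic), restrict it to the band of plausible heart rates, and read off the dominant frequency. The serial-fusion logic and the "poorness" measure of the paper are not reproduced here; this only shows the shared frequency-domain core.

```python
import numpy as np

def estimate_hr(signal, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) as the dominant frequency in [lo, hi] Hz."""
    sig = signal - np.mean(signal)              # remove DC component
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    spec = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo) & (freqs <= hi)        # plausible heart-rate band
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

For a clean 1.2 Hz pulsation sampled at a typical 30 fps camera rate this returns about 72 bpm; real face-video signals additionally need the motion and noise handling described in the abstract.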

  17. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  18. An accurate bimaxillary repositioning technique using straight locking miniplates for the mandible-first approach in bimaxillary orthognathic surgery.

    Science.gov (United States)

    Iwai, Toshinori; Omura, Susumu; Honda, Koji; Yamashita, Yosuke; Shibutani, Naoki; Fujita, Koichi; Takasu, Hikaru; Murata, Shogo; Tohnai, Iwai

    2017-01-01

    Bimaxillary orthognathic surgery has been widely performed to achieve optimal functional and esthetic outcomes in patients with dentofacial deformity. Although Le Fort I osteotomy is generally performed before bilateral sagittal split osteotomy (BSSO) in the surgery, in several situations BSSO should be performed first. However, it is very difficult during bimaxillary orthognathic surgery to maintain an accurate centric relation of the condyle and decide the ideal vertical dimension from the skull base to the mandible. We have previously applied a straight locking miniplate (SLM) technique that permits accurate superior maxillary repositioning without the need for intraoperative measurements in bimaxillary orthognathic surgery. Here we describe the application of this technique for accurate bimaxillary repositioning in a mandible-first approach where the SLMs also serve as a condylar positioning device in bimaxillary orthognathic surgery.

  19. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

Full Text Available The terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study such a communication system very carefully, both theoretically and numerically, before its installation. In this work, we present exact and accurate approximate mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link’s parameters, the transmitted power, the attenuation due to the fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver’s end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems’ performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
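The weak-turbulence case mentioned above admits a closed form worth sketching. Assuming lognormal scintillation with the irradiance normalized to unit mean (a common convention, not taken from this paper), the outage probability is the lognormal CDF evaluated at the irradiance threshold; the parameter names below are illustrative.

```python
import math

def lognormal_outage(i_th, sigma2):
    """Outage probability P(I < i_th) for lognormal irradiance with E[I] = 1
    and log-variance sigma2 (weak-turbulence regime)."""
    sigma = math.sqrt(sigma2)
    # ln I ~ Normal(-sigma2/2, sigma2) so that E[I] = 1
    z = (math.log(i_th) + sigma2 / 2) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

As expected physically, for a threshold below the mean irradiance the outage probability grows with the turbulence strength sigma2; the moderate-to-strong regime instead uses the gamma–gamma distribution, whose CDF involves special functions and is better evaluated numerically, as the abstract notes.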

  20. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    International Nuclear Information System (INIS)

    Zhu, Xiaoli; Liu, Min; Zhang, Huihui; Wang, Haiyan; Li, Genxi

    2013-01-01

Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with some molecules before use. However, the information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted the better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike the traditional physical methods, this method, which is based on the catalytic property of AuNPs, may give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been verified by adopting three independent molecules to functionalize AuNPs, including both covalent and non-covalent functionalization. Some interesting results are thereby obtained, and some are revealed for the first time. The method may also be further developed as a useful tool for the characterization of a solid surface.

  1. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Science.gov (United States)

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  2. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xiaoli; Liu, Min; Zhang, Huihui [Shanghai University, Laboratory of Biosensing Technology, School of Life Sciences (China); Wang, Haiyan [Nanjing University, State Key Laboratory of Pharmaceutical Biotechnology, Department of Biochemistry (China); Li, Genxi, E-mail: genxili@nju.edu.cn [Shanghai University, Laboratory of Biosensing Technology, School of Life Sciences (China)

    2013-09-15

Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with some molecules before use. However, the information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted the better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike the traditional physical methods, this method, which is based on the catalytic property of AuNPs, may give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been verified by adopting three independent molecules to functionalize AuNPs, including both covalent and non-covalent functionalization. Some interesting results are thereby obtained, and some are revealed for the first time. The method may also be further developed as a useful tool for the characterization of a solid surface.

  3. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    Science.gov (United States)

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  4. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    Directory of Open Access Journals (Sweden)

    Jaco Prinsloo

    2016-06-01

Full Text Available Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  5. A Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  6. Sentinel nodes identified by computed tomography-lymphography accurately stage the axilla in patients with breast cancer

    International Nuclear Information System (INIS)

    Motomura, Kazuyoshi; Sumino, Hiroshi; Noguchi, Atsushi; Horinouchi, Takashi; Nakanishi, Katsuyuki

    2013-01-01

    Sentinel node biopsy often results in the identification and removal of multiple nodes as sentinel nodes, although most of these nodes could be non-sentinel nodes. This study investigated whether computed tomography-lymphography (CT-LG) can distinguish sentinel nodes from non-sentinel nodes and whether sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. This study included 184 patients with breast cancer and clinically negative nodes. Contrast agent was injected interstitially. The location of sentinel nodes was marked on the skin surface using a CT laser light navigator system. Lymph nodes located just under the marks were first removed as sentinel nodes. Then, all dyed nodes or all hot nodes were removed. The mean number of sentinel nodes identified by CT-LG was significantly lower than that of dyed and/or hot nodes removed (1.1 vs 1.8, p <0.0001). Twenty-three (12.5%) patients had ≥2 sentinel nodes identified by CT-LG removed, whereas 94 (51.1%) patients had ≥2 dyed and/or hot nodes removed (p <0.0001). Pathological evaluation demonstrated that 47 (25.5%) of 184 patients had metastasis to at least one node. All 47 patients demonstrated metastases to at least one of the sentinel nodes identified by CT-LG. CT-LG can distinguish sentinel nodes from non-sentinel nodes, and sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. Successful identification of sentinel nodes using CT-LG may facilitate image-based diagnosis of metastasis, possibly leading to the omission of sentinel node biopsy.

  7. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands and dozens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are ineffective on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  8. A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).

    Science.gov (United States)

    Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J

    2017-10-23

    An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling in amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes. PrInCE is the first fast and easy
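The core signal exploited by such pipelines — similarity of chromatographic elution profiles — can be sketched with a simple Pearson-correlation check on synthetic peaks (an illustration of the general idea only; PrInCE combines several such profile features in a machine-learning classifier):

```python
import math

def pearson(a, b):
    """Pearson correlation between two elution profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def profile(center, fractions=30, width=2.0):
    """Synthetic Gaussian elution peak over chromatographic fractions."""
    return [math.exp(-((i - center) ** 2) / (2 * width ** 2))
            for i in range(fractions)]

prot_a = profile(10.0)   # two proteins eluting together...
prot_b = profile(10.5)
prot_c = profile(22.0)   # ...and a third eluting elsewhere

r_ab = pearson(prot_a, prot_b)
r_ac = pearson(prot_a, prot_c)
print(round(r_ab, 3), round(r_ac, 3))  # strong vs. weak co-elution evidence
```

Near-identical peak positions give a correlation close to 1, while disjoint peaks do not; a real pipeline must additionally control the false discovery rate, since unrelated proteins can co-elute by chance.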

  9. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design: not just the specifications comprising today's systems but how key technologies and p

  10. CREATIVE APPROACHES TO COMPUTER SCIENCE EDUCATION

    Directory of Open Access Journals (Sweden)

    V. B. Raspopov

    2010-04-01

    Full Text Available Using the example of the PPS «Toolbox of Multimedia Lessons "For Children About Chopin"», we demonstrate the possibility of involving creative students in developing software packages for educational purposes. Similar projects can be assigned to school and college students studying computer science and informatics, and implemented under teachers' supervision as advanced assignments or thesis projects within a high-school IT or Computer Science course or a college course on Applied Scientific Research, or as preparation for participation in the Computer Science or IT competitions of the Youth Academy of Sciences (MAN in Russian and Ukrainian).

  11. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinct datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062
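The mode-switching machinery of an IMM estimator can be sketched for a scalar tracking problem with two random-walk models — a "quiet" one with small process noise and an "agile" one with large process noise (a generic textbook IMM, not the authors' heart-motion models):

```python
import math
import random

def kf_step(x, P, z, q, r):
    """Scalar random-walk Kalman filter: predict, update, and return
    the posterior state, covariance, and measurement likelihood."""
    Pp = P + q                          # predicted covariance
    S = Pp + r                          # innovation covariance
    K = Pp / S                          # Kalman gain
    innov = z - x
    lik = math.exp(-innov ** 2 / (2 * S)) / math.sqrt(2 * math.pi * S)
    return x + K * innov, (1 - K) * Pp, lik

def imm_step(xs, Ps, mu, z, qs, r, T):
    """One IMM cycle: mix estimates, filter per model, update mode
    probabilities, and fuse the model-conditioned estimates."""
    M = len(xs)
    cbar = [sum(T[i][j] * mu[i] for i in range(M)) for j in range(M)]
    nx, nP, liks = [], [], []
    for j in range(M):
        w = [T[i][j] * mu[i] / cbar[j] for i in range(M)]
        xm = sum(w[i] * xs[i] for i in range(M))
        Pm = sum(w[i] * (Ps[i] + (xs[i] - xm) ** 2) for i in range(M))
        xj, Pj, L = kf_step(xm, Pm, z, qs[j], r)
        nx.append(xj); nP.append(Pj); liks.append(L)
    mu = [liks[j] * cbar[j] for j in range(M)]
    total = sum(mu)
    mu = [m / total for m in mu]
    xhat = sum(mu[j] * nx[j] for j in range(M))
    return nx, nP, mu, xhat

random.seed(1)
qs = [1e-3, 1.0]                        # "quiet" vs. "agile" process noise
r = 0.04                                # measurement noise variance
T = [[0.95, 0.05], [0.05, 0.95]]        # sticky mode transition matrix
xs, Ps, mu = [0.0, 0.0], [1.0, 1.0], [0.5, 0.5]
history = []
for _ in range(100):                    # a calm signal: quiet mode should win
    z = random.gauss(0.0, 0.2)
    xs, Ps, mu, xhat = imm_step(xs, Ps, mu, z, qs, r, T)
    history.append(mu[0])
avg_quiet = sum(history[50:]) / 50
print(round(avg_quiet, 2))              # quiet-model probability dominates
```

Feeding the same filter a jumpy signal would shift the probability mass onto the agile model; the paper's extension replaces this fixed transition matrix with one driven by a signal quality index.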

  12. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinct datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.

  13. Accurately computing the optical pathlength difference for a Michelson interferometer with minimal knowledge of the source spectrum.

    Science.gov (United States)

    Milman, Mark H

    2005-12-01

    Astrometric measurements using stellar interferometry rely on precise measurement of the central white light fringe to accurately obtain the optical pathlength difference of incoming starlight to the two arms of the interferometer. One standard approach to stellar interferometry uses a channeled spectrum to determine phases at a number of different wavelengths that are then converted to the pathlength delay. When throughput is low these channels are broadened to improve the signal-to-noise ratio. Ultimately the ability to use monochromatic models and algorithms in each of the channels to extract phase becomes problematic and knowledge of the spectrum must be incorporated to achieve the accuracies required of the astrometric measurements. To accomplish this an optimization problem is posed to estimate simultaneously the pathlength delay and spectrum of the source. Moreover, the nature of the parameterization of the spectrum that is introduced circumvents the need to solve directly for these parameters so that the optimization problem reduces to a scalar problem in just the pathlength delay variable. A number of examples are given to show the robustness of the approach.
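The key point — that the problem reduces to a scalar search over the delay — can be illustrated with a toy channeled-spectrum model (a noiseless least-squares fit with a flat, known source spectrum, unlike the paper's joint spectrum-and-delay estimator):

```python
import math

# Two-beam interference: I(k) = 1 + cos(k * d), sampled over wavenumber channels
ks = [8.0 + 0.08 * i for i in range(51)]     # broadband channel grid
d_true = 3.7                                 # optical pathlength difference
data = [1.0 + math.cos(k * d_true) for k in ks]

def misfit(d):
    """Least-squares mismatch between model and measured channel intensities."""
    return sum((I - (1.0 + math.cos(k * d))) ** 2 for k, I in zip(ks, data))

# Scalar problem: a brute-force scan over the delay suffices here, because the
# wide wavenumber band removes the fringe ambiguity of a monochromatic model.
d_grid = min((0.001 * n for n in range(10000)), key=misfit)
print(round(d_grid, 3))  # recovers d_true = 3.7
```

With a narrow band the cosine would alias and many delays would fit equally well; the wide channel grid is what makes the scalar minimum unique.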

  14. Accurately identifying patients who are excellent candidates or unsuitable for a medication: a novel approach

    Directory of Open Access Journals (Sweden)

    South C

    2017-12-01

    Full Text Available Charles South,1–3 A John Rush,4,* Thomas J Carmody,1–3 Manish K Jha,1,2 Madhukar H Trivedi1,2,* 1Center for Depression Research and Clinical Care, 2Department of Psychiatry, 3Department of Clinical Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; 4Department of Psychiatry and Behavioral Sciences, Duke-National University of Singapore, Singapore; Duke Medical School, Durham, NC, USA; *These authors contributed equally to this work. Objective: The objective of the study was to determine whether a unique analytic approach – as a proof of concept – could identify individual depressed outpatients (using 30 baseline clinical and demographic variables) who are very likely (75% certain) to not benefit (NB) or to remit (R), accepting that without sufficient certainty, no prediction (NP) would be made. Methods: Patients from the Combining Medications to Enhance Depression Outcomes trial treated with escitalopram (S-CIT) + placebo (n=212) or S-CIT + bupropion-SR (n=206) were analyzed separately to assess replicability. For each treatment, the elastic net was used to identify subsets of predictive baseline measures for R and NB, separately. Two different equations that estimate the likelihood of remission and no benefit were developed for each patient. The ratio of these two numbers characterized likely outcomes for each patient. Results: The two treatment cells had comparable rates of remission (40%) and no benefit (22%). In S-CIT + bupropion-SR, 11 were predicted NB, of which 82% were correct; 26 were predicted R – 85% correct (169 had NP). For S-CIT + placebo, 13 were predicted NB – 69% correct; 44 were predicted R – 75% correct (155 were NP). Overall, 94/418 (22%) patients were identified with a meaningful degree of certainty (69%–85% correct). Different variable sets with some overlap were predictive of remission and no benefit within and across treatments, despite comparable outcomes. Conclusion: In two separate analyses with two
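The two-equations-plus-ratio logic can be mimicked on synthetic data: train one model for remission and one for no benefit, and only emit a label when the estimated probability clears a certainty bar (a toy one-variable logistic stand-in with an elastic-net-style penalty, not the study's 30-variable elastic net):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, l1=0.01, l2=0.01, lr=0.5, epochs=2000):
    """1-D logistic regression with an elastic-net-style L1+L2 penalty,
    fitted by plain gradient descent."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            e = sigmoid(w * x + b) - y
            gw += e * x
            gb += e
        sign = 1.0 if w > 0 else -1.0 if w < 0 else 0.0
        w -= lr * (gw / n + l2 * w + l1 * sign)   # L2 gradient + L1 subgradient
        b -= lr * gb / n
    return w, b

xs = [i / 10.0 - 2.0 for i in range(41)]           # synthetic baseline score
y_remit = [1 if x > 0.5 else 0 for x in xs]        # remitters: high score
y_nobenefit = [1 if x < -0.5 else 0 for x in xs]   # no benefit: low score

mR = train_logistic(xs, y_remit)
mNB = train_logistic(xs, y_nobenefit)

def classify(x, certainty=0.75):
    """Label R or NB only when sufficiently certain, else no prediction."""
    pR = sigmoid(mR[0] * x + mR[1])
    pNB = sigmoid(mNB[0] * x + mNB[1])
    if pR >= certainty and pR > pNB:
        return "R"
    if pNB >= certainty and pNB > pR:
        return "NB"
    return "NP"

print(classify(2.0), classify(-2.0), classify(0.0))
```

The middle of the score range falls below the certainty bar and gets no prediction, mirroring the large NP groups reported above.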

  15. Computer science approach to quantum control

    International Nuclear Information System (INIS)

    Janzing, D.

    2006-01-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics therefore has two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  16. Computational and Experimental Approaches to Visual Aesthetics

    Science.gov (United States)

    Brachmann, Anselm; Redies, Christoph

    2017-01-01

    Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics is discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491

  17. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    Science.gov (United States)

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
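The dead-reckoning component of such a pipeline is essentially odometry integrated through a unicycle motion model (a generic sketch; in the paper this drift-prone estimate is continuously corrected by the visual lane-marking detector):

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Euler-integrate a unicycle model: forward speed v, yaw rate omega."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                  # drive straight 10 m at 1 m/s
    pose = dead_reckon(pose, 1.0, 0.0, 0.1)
for _ in range(10):                   # then rotate 90 degrees in place
    pose = dead_reckon(pose, 0.0, math.pi / 2, 0.1)
x, y, th = pose
print(round(x, 2), round(y, 2), round(math.degrees(th), 1))  # 10.0 0.0 90.0
```

In practice wheel slip and heading error make the integrated pose drift without bound, which is why a drift-free exteroceptive cue such as lane markings is needed to anchor it to the map.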

  18. Computational Approaches to Chemical Hazard Assessment

    Science.gov (United States)

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  19. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  20. Hydrogen sulfide detection based on reflection: from a poison test approach of ancient China to single-cell accurate localization.

    Science.gov (United States)

    Kong, Hao; Ma, Zhuoran; Wang, Song; Gong, Xiaoyun; Zhang, Sichun; Zhang, Xinrong

    2014-08-05

    With the inspiration of an ancient Chinese poison test approach, we report a rapid hydrogen sulfide detection strategy for specific areas of live cells using silver needles, with a good spatial resolution of 2 × 2 μm². Besides its accurate-localization ability, this reflection-based strategy also has the attractive merits of convenience and robust response, requiring no pretreatment and only a short detection time. The successful evaluation of endogenous H2S levels in the cytoplasm and nucleus of human A549 cells demonstrates the application potential of our strategy in scientific research and medical diagnosis.

  1. A machine learning approach to the accurate prediction of monitor units for a compact proton machine.

    Science.gov (United States)

    Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu

    2018-05-01

    Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from the machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, the gantry angle dependence of OFs was measured for three groups of options categorized by the selection of the second scatterers. The field size dependence of OFs was investigated for measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which shows discrepancies of up to 7.7% for treatment fields with full range and full modulation width. The Cubist-based solution
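The validation protocol described here — k-fold cross-validation of a model mapping gantry angle and field size to output — can be sketched as follows; a 1-nearest-neighbour regressor on a hypothetical smooth OF surface stands in for the Random Forest/XGBoost/Cubist models and measured data of the actual study:

```python
import math
import random

random.seed(0)

def true_output(angle, fs):
    """Hypothetical smooth output-factor surface (cGy/MU) for illustration."""
    return 0.9 + 0.005 * fs + 0.02 * math.sin(math.radians(angle))

data = [(random.uniform(0, 360), random.uniform(2, 20)) for _ in range(300)]
labels = [true_output(a, f) for a, f in data]

def predict_1nn(train_idx, a, f):
    """1-nearest-neighbour prediction with features scaled to comparable ranges."""
    best = min(train_idx,
               key=lambda i: ((data[i][0] - a) / 360) ** 2
                           + ((data[i][1] - f) / 20) ** 2)
    return labels[best]

# 10-fold cross-validation: each sample is held out exactly once
idx = list(range(len(data)))
random.shuffle(idx)
folds = [idx[i::10] for i in range(10)]
errors = []
for k in range(10):
    fold_set = set(folds[k])
    train = [i for i in idx if i not in fold_set]
    for i in folds[k]:
        a, f = data[i]
        errors.append(abs(predict_1nn(train, a, f) - labels[i]))
mae = sum(errors) / len(errors)
print(round(mae, 4))  # mean absolute held-out error on the smooth surface
```

The point of the fold structure is that every prediction is made by a model that never saw that sample, which is what makes the reported accuracy an honest estimate rather than a fit to the training set.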

  2. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    Science.gov (United States)

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.
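The two agreement statistics reported above — mean percentage misestimation and Spearman rank correlation between served and estimated weights — can be reproduced in a few lines (the weights below are illustrative, not the study's data):

```python
def rank(values):
    """1-based ranks with ties averaged, as needed for Spearman's rho."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

served = [150, 80, 200, 120, 60, 95]     # self-served weights (g), illustrative
estimated = [130, 85, 170, 80, 55, 90]   # estimates made from food photographs

rho = spearman(served, estimated)
bias = sum((e - s) / s for s, e in zip(served, estimated)) / len(served) * 100
print(round(rho, 2), round(bias, 1))     # rank agreement and mean % misestimation
```

A high rho with a negative bias reproduces the paper's pattern: photographs rank portions well even while systematically underestimating absolute amounts.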

  3. Approaching Engagement towards Human-Engaged Computing

    DEFF Research Database (Denmark)

    Niksirat, Kavous Salehzadeh; Sarcar, Sayan; Sun, Huatong

    2018-01-01

    Debates regarding the nature and role of HCI research and practice have intensified in recent years, given the ever increasingly intertwined relations between humans and technologies. The framework of Human-Engaged Computing (HEC) was proposed and developed over a series of scholarly workshops to...

  4. Computational and mathematical approaches to societal transitions

    NARCIS (Netherlands)

    J.S. Timmermans (Jos); F. Squazzoni (Flaminio); J. de Haan (Hans)

    2008-01-01

    After an introduction of the theoretical framework and concepts of transition studies, this article gives an overview of how structural change in social systems has been studied from various disciplinary perspectives. This overview first leads to the conclusion that computational and

  5. Heterogeneous Computing in Economics: A Simplified Approach

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a ...

  6. A Constructive Induction Approach to Computer Immunology

    Science.gov (United States)

    1999-03-01

    [LVM98] Lamont, Gary B., David A. Van Veldhuizen, and Robert E. Marmelstein, A Distributed Architecture for a Self-Adaptive Computer Virus...Artificial Intelligence, Herndon, VA, 1995. [MVL98] Marmelstein, Robert E., David A. Van Veldhuizen, and Gary B. Lamont. Modeling & Analysis

  7. Computational approach for a pair of bubble coalescence process

    International Nuclear Information System (INIS)

    Nurul Hasan; Zalinawati binti Zakaria

    2011-01-01

    The coalescence of bubbles has great value in mineral recovery and the oil industry. In this paper, two co-axial bubbles rising in a cylinder are modelled to study the coalescence of bubbles for four computational experimental test cases. The Reynolds number (Re) is chosen between 8.50 and 10, the Bond number Bo ∼ 4.25-50, and the Morton number M ∼ 0.0125-14.7. The viscosity ratio (μr) and density ratio (ρr) of liquid to bubble are kept constant (100 and 850, respectively). It was found that the Bo number has a significant effect on the coalescence process for constant Re, μr and ρr. The bubble-bubble distance over time was validated against published experimental data. The results show that the VOF approach can be used to model these phenomena accurately. The surface tension was changed to alter Bo, and the density of the fluids to alter Re and M, keeping μr and ρr the same. It was found that for lower Bo, bubble coalescence is slower and the pocket at the lower part of the leading bubble is less concave (downward), which is supported by the experimental data.
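The dimensionless groups quoted above follow directly from the fluid properties; a small helper (standard textbook definitions, with illustrative water-air-like values) makes such parameter choices reproducible:

```python
def reynolds(rho, U, d, mu):
    """Re = rho*U*d/mu: inertial vs. viscous forces."""
    return rho * U * d / mu

def bond(drho, g, d, sigma):
    """Bo = drho*g*d^2/sigma: buoyancy vs. surface tension."""
    return drho * g * d ** 2 / sigma

def morton(g, mu, drho, rho, sigma):
    """M = g*mu^4*drho/(rho^2*sigma^3): a fluid-property-only group."""
    return g * mu ** 4 * drho / (rho ** 2 * sigma ** 3)

# Illustrative air bubble (d = 1 cm, rise speed 0.25 m/s) in water
rho, drho, mu, sigma, g = 1000.0, 1000.0, 1.0e-3, 0.0725, 9.81
Re = reynolds(rho, 0.25, 0.01, mu)
Bo = bond(drho, g, 0.01, sigma)
M = morton(g, mu, drho, rho, sigma)
print(Re, round(Bo, 2), M)  # M ≈ 2.6e-11, the familiar value for water
```

Because M depends only on fluid properties while Bo also carries the bubble diameter, the paper's strategy of varying surface tension to sweep Bo and density to sweep Re and M falls straight out of these definitions.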

  8. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Science.gov (United States)

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  9. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Directory of Open Access Journals (Sweden)

    Markus Niklasson

    2015-01-01

    Full Text Available The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  10. Computational Enzymology, a ReaxFF approach

    DEFF Research Database (Denmark)

    Corozzi, Alessandro

    This PhD thesis concerns the development of a new method to improve our understanding of enzyme catalysis in atomistic detail. Currently the theory able to describe chemical systems and their reactivity is quantum mechanics (QM): electronic structure methods that use approximations of QM...... there are ordinary classical models - the molecular mechanics (MM) force fields - that use Newtonian mechanics to describe molecular systems. At this level it is possible to include the entire enzyme system while keeping the equations computationally light, but at the cost of easy modeling of chemical transformations during...... the simulation time. In short: on one hand we have accurate QM methods able to describe reactivity but limited in the size of the systems they can treat, while on the other hand we have molecular mechanics and ordinary force fields that are virtually unlimited in size but unable to straightforwardly describe...

  11. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  12. An accurate method for computer-generating tungsten anode x-ray spectra from 30 to 140 kV.

    Science.gov (United States)

    Boone, J M; Seibert, J A

    1997-11-01

    A tungsten anode spectral model using interpolating polynomials (TASMIP) was used to compute x-ray spectra at 1 keV intervals over the range from 30 kV to 140 kV. The TASMIP is not semi-empirical and uses no physical assumptions regarding x-ray production, but rather interpolates measured constant potential x-ray spectra published by Fewell et al. [Handbook of Computed Tomography X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1981)]. X-ray output measurements (mR/mAs measured at 1 m) were made on a calibrated constant potential generator in our laboratory from 50 kV to 124 kV, and with 0-5 mm added aluminum filtration. The Fewell spectra were slightly modified (numerically hardened) and normalized based on the attenuation and output characteristics of a constant potential generator and metal-insert x-ray tube in our laboratory. Then, using the modified Fewell spectra of different kVs, the photon fluence φ at each 1 keV energy bin (E) over energies from 10 keV to 140 keV was characterized using polynomial functions of the form φ(E) = a0[E] + a1[E]·kV + a2[E]·kV² + ... + an[E]·kVⁿ. A total of 131 polynomial functions were used to calculate accurate x-ray spectra, each function requiring between two and four terms. The resulting TASMIP algorithm produced x-ray spectra that match both the quality and quantity characteristics of the x-ray system in our laboratory. For photon fluences above 10% of the peak fluence in the spectrum, the average percent difference (and standard deviation) between the modified Fewell spectra and the TASMIP photon fluence was -1.43% (3.8%) for the 50 kV spectrum, -0.89% (1.37%) for the 70 kV spectrum, and for the 80, 90, 100, 110, 120, 130 and 140 kV spectra, the mean differences between spectra were all less than 0.20% and the standard deviations were less than approximately 1.1%. The model was also extended to include the effects of generator-induced kV ripple. Finally, the x-ray photon fluence in the units of
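The per-energy-bin polynomial characterization described above can be sketched directly; the coefficients below are placeholders for illustration only, not the published TASMIP fit:

```python
def tasmip_fluence(coeffs, kv):
    """Evaluate phi(E) = a0[E] + a1[E]*kV + ... + an[E]*kV^n for every
    1 keV energy bin. `coeffs` maps each bin energy (keV) to its polynomial
    coefficients, lowest order first; bins above the tube potential get
    zero fluence, since no photons can exceed the peak electron energy."""
    spectrum = {}
    for energy, a in coeffs.items():
        if energy > kv:
            spectrum[energy] = 0.0
        else:
            spectrum[energy] = sum(c * kv ** k for k, c in enumerate(a))
    return spectrum

# Placeholder two-bin model (illustrative coefficients, not TASMIP's):
demo = {20: [0.0, 1.0], 30: [0.0, 0.0, 0.5]}
spec = tasmip_fluence(demo, 80)
```

In the real model each of the 131 bins carries its own two-to-four-term fit, so a full spectrum is just one such evaluation per bin.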

  13. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  14. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  15. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that time-optimal routes between worldwide airport pairs in a wind field can be calculated for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare their computational efficiencies in light of the potential applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  16. Computational approach to large quantum dynamical problems

    International Nuclear Information System (INIS)

    Friesner, R.A.; Brunet, J.P.; Wyatt, R.E.; Leforestier, C.; Binkley, S.

    1987-01-01

    The organizational structure is described for a new program that permits computations on a variety of quantum mechanical problems in chemical dynamics and spectroscopy. Particular attention is devoted to developing and using algorithms that exploit the capabilities of current vector supercomputers. A key component in this procedure is the recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix. An application to time-dependent laser molecule energy transfer is presented. Rate of energy deposition in the multimode molecule for systematic variations in the molecular intermode coupling parameters is emphasized
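The recursive reduction of a large sparse symmetric Hamiltonian to a much smaller tridiagonal matrix described above is, in essence, the Lanczos recursion; a minimal pure-Python sketch on a dense toy matrix (not the authors' vectorized supercomputer implementation):

```python
def lanczos(matvec, v0, m):
    """Reduce a symmetric operator to an m x m tridiagonal matrix T via
    the three-term Lanczos recursion. Returns the diagonal entries
    (alphas) and off-diagonal entries (betas) of T."""
    n = len(v0)
    norm = sum(x * x for x in v0) ** 0.5
    v = [x / norm for x in v0]          # current Lanczos vector
    v_prev = [0.0] * n                  # previous Lanczos vector
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(v)
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        alphas.append(alpha)
        # Orthogonalize against the two previous vectors (three-term recursion).
        w = [wi - alpha * vi - beta * pi for wi, vi, pi in zip(w, v, v_prev)]
        beta = sum(x * x for x in w) ** 0.5
        if beta < 1e-12:                # invariant subspace found; stop early
            break
        betas.append(beta)
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas

# Toy 2x2 "Hamiltonian" [[2,1],[1,2]]: a full-length recursion (m = n)
# reproduces the matrix itself in tridiagonal form.
A = [[2.0, 1.0], [1.0, 2.0]]
mv = lambda x: [sum(a * xi for a, xi in zip(row, x)) for row in A]
alphas, betas = lanczos(mv, [1.0, 0.0], 2)
```

For a sparse Hamiltonian only the matrix-vector product needs to be supplied, which is what makes the scheme attractive on vector hardware.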

  17. A complex network approach to cloud computing

    International Nuclear Information System (INIS)

    Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2016-01-01

    Cloud computing has become an important means of speeding up computing. One problem that heavily influences the performance of such systems is the choice of nodes to serve as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the processing performance of cloud systems underlaid by Erdős–Rényi (ER) and Barabási–Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite behaviour for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are not much affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance.

  18. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to important novel CI technologies: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNNs, including a new class of FNN - cascade neo-fuzzy neural networks - are considered, and their training algorithms are described and analyzed. The applications of FNNs to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty and a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and the results of their application for bankruptcy ris...

  19. Q-P Wave traveltime computation by an iterative approach

    KAUST Repository

    Ma, Xuxin; Alkhalifah, Tariq Ali

    2013-01-01

    In this work, we present a new approach to computing anisotropic traveltimes based on successively solving elliptical isotropic traveltime problems. The method shows good accuracy and is very simple to implement.

  20. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...

  1. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of these is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  2. Accurate quantification of endogenous androgenic steroids in cattle's meat by gas chromatography mass spectrometry using a surrogate analyte approach

    International Nuclear Information System (INIS)

    Ahmadkhaniha, Reza; Shafiee, Abbas; Rastkari, Noushin; Kobarfard, Farzad

    2009-01-01

    Determination of endogenous steroids in complex matrices such as cattle's meat is a challenging task. Since endogenous steroids always exist in animal tissues, no analyte-free matrix is available for constructing the standard calibration line, which is crucial for accurate quantification, especially at trace levels. Although some methods have been proposed to solve the problem, none has offered a complete solution. To this end, a new quantification strategy was developed in this study, named the 'surrogate analyte approach', which is based on using isotope-labeled standards instead of the natural forms of endogenous steroids for preparing the calibration line. In comparison with the other methods currently in use for the quantitation of endogenous steroids, this approach provides improved simplicity and speed for analysis on a routine basis. The accuracy of this method is better than that of other methods at low concentrations and comparable to standard addition at medium and high concentrations. The method was also found to be valid according to the ICH criteria for bioanalytical methods. The developed method could be a promising approach in the field of compound residue analysis.
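A minimal sketch of the surrogate-analyte idea: build an ordinary linear calibration from the isotope-labeled standard (which is absent from real samples, so no analyte-free matrix is needed) and apply it to the natural analyte's response, assuming equal response factors for the labeled and natural forms. All numbers are illustrative, not from the paper:

```python
def fit_line(x, y):
    """Least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Calibration built from the matrix spiked with isotope-labeled standard.
conc = [1.0, 2.0, 5.0, 10.0]     # spiked label concentration (ng/g, illustrative)
resp = [2.1, 4.0, 10.2, 19.9]    # instrument response (illustrative)
slope, intercept = fit_line(conc, resp)

# The endogenous steroid is then quantified from its own natural-form
# response on the same calibration line.
natural_response = 8.0
endogenous = (natural_response - intercept) / slope
```

The whole point is that the labeled surrogate can be spiked into the real matrix without interference from the endogenous background, sidestepping the missing-blank problem.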

  3. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    Science.gov (United States)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  4. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may result from nonresponse or omitted entries, and if they are not handled properly they may produce inaccurate results during data analysis. Although a traditional method such as the maximum likelihood method can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. This paper describes the behavior and characteristics of the Kestrel bird, a bioinspired approach, in modeling an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The Wilcoxon test results indicate that time does not have a significant effect on performance, and that the difference in quality of estimation between the paired algorithms was significant; the Friedman test ranked KSA as the best evolutionary algorithm.
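The MAE criterion used to rank the imputation algorithms is simple to state; a minimal sketch with illustrative values:

```python
def mean_absolute_error(true_vals, imputed_vals):
    """MAE between held-out true values and the estimates an imputation
    algorithm (e.g. KSA) produced for the corresponding missing cells."""
    assert len(true_vals) == len(imputed_vals)
    return sum(abs(t, ) if False else abs(t - e)
               for t, e in zip(true_vals, imputed_vals)) / len(true_vals)

truth = [3.0, 5.0, 2.0, 8.0]       # values deliberately held out
estimate = [2.5, 5.5, 2.0, 7.0]    # illustrative imputed values
mae = mean_absolute_error(truth, estimate)   # (0.5 + 0.5 + 0 + 1) / 4
```

A lower MAE across many held-out cells is what the paper's comparison against WSAMP, Firefly, and BAT rests on.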

  5. Meta-analytic approach to the accurate prediction of secreted virulence effectors in gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Sato Yoshiharu

    2011-11-01

    Full Text Available Abstract Background Many pathogens use a type III secretion system to translocate virulence proteins (called effectors) in order to adapt to the host environment. To date, many prediction tools for effector identification have been developed. However, these tools are insufficiently accurate for producing a list of putative effectors that can be applied directly for labor-intensive experimental verification. This also suggests that important features of effectors have yet to be fully characterized. Results In this study, we have constructed an accurate approach to predicting secreted virulence effectors from Gram-negative bacteria. It consists of a support vector machine-based discriminant analysis followed by simple criteria-based filtering. The accuracy was assessed by estimating the average number of true positives in the top-20 ranking in the genome-wide screening. In the validation, 10 sets of 20 training and 20 testing examples were randomly selected from 40 known effectors of Salmonella enterica serovar Typhimurium LT2. On average, the SVM portion of our system predicted 9.7 true positives from 20 testing examples in the top-20 of the prediction. Removal of the N-terminal instability, codon adaptation index and ProtParam indices decreased the score to 7.6, 8.9 and 7.9, respectively. These discrimination features suggested that the following characteristics of effectors had been uncovered: unstable N-terminus, non-optimal codon usage, hydrophilic, and less aliphatic. The secondary filtering process, represented by coexpression analysis and domain distribution analysis, further refined the average true positive count to 12.3. We further confirmed that our system can correctly predict known effectors of P. syringae DC3000, strongly indicating its feasibility. Conclusions We have successfully developed an accurate prediction system for screening effectors on a genome-wide scale. We confirmed the accuracy of our system by external validation

  6. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Science.gov (United States)

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. Copyright © 2012 Elsevier Ltd. All rights reserved.
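The octant-identifier step of the clustering can be sketched as a Morton-style encoding: at each subdivision level one bit per axis selects a child octant, and the densest leaf is found by counting identifiers. This is a sketch of the idea, not the authors' Hadoop/MapReduce code:

```python
from collections import Counter

def octant_id(point, depth, lo=0.0, hi=1.0):
    """Assign an octree-leaf identifier to a 3D point in [lo, hi)^3 by
    choosing one of 8 children at each of `depth` subdivision levels."""
    code = 0
    bounds = [[lo, hi], [lo, hi], [lo, hi]]
    for _ in range(depth):
        child = 0
        for axis in range(3):
            mid = (bounds[axis][0] + bounds[axis][1]) / 2
            if point[axis] >= mid:
                child |= 1 << axis      # set the bit for this axis
                bounds[axis][0] = mid   # descend into the upper half
            else:
                bounds[axis][1] = mid   # descend into the lower half
        code = (code << 3) | child      # append 3 bits per level
    return code

# Each ligand conformation is first reduced to a single 3D point; the most
# densely populated octant then holds the consensus binding geometry.
points = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.09), (0.9, 0.9, 0.9)]
counts = Counter(octant_id(p, depth=2) for p in points)
densest, n = counts.most_common(1)[0]
```

Because each point maps independently to an integer key, the map phase emits (octant, 1) pairs and the reduce phase is a simple count, which is what makes the method fit MapReduce so naturally.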

  7. Computational fluid dynamics in ventilation: Practical approach

    Science.gov (United States)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants in a surface-treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to the suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface-treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.

  8. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  9. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. Several attempts have been made to predict the effects of such mutations. The most successful of these is a computational approach called ''Sorting Intolerant From Tolerant'' (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which

  10. Pepsi-SAXS: an adaptive method for rapid and accurate computation of small-angle X-ray scattering profiles.

    Science.gov (United States)

    Grudinin, Sergei; Garkavenko, Maria; Kazennov, Andrei

    2017-05-01

    A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist-Shannon-Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion order, this method has the same quadratic dependence on the number of atoms in the model as the Debye-based approach, but with a much smaller prefactor in the computational complexity. The method has been systematically validated on a large set of over 50 models collected from the BioIsis and SASBDB databases. Using a laptop, it was demonstrated that Pepsi-SAXS is about seven, 29 and 36 times faster compared with CRYSOL, FoXS and the three-dimensional Zernike method in SAStbx, respectively, when tested on data from the BioIsis database, and is about five, 21 and 25 times faster compared with CRYSOL, FoXS and SAStbx, respectively, when tested on data from SASBDB. On average, Pepsi-SAXS demonstrates comparable accuracy in terms of χ² to CRYSOL and FoXS when tested on BioIsis and SASBDB profiles. Together with a small allowed variation of adjustable parameters, this demonstrates the effectiveness of the method. Pepsi-SAXS is available at http://team.inria.fr/nano-d/software/pepsi-saxs.
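For contrast, the Debye-based reference approach mentioned in the abstract sums sin(qr)/(qr) over all atom pairs, which is what gives it its large quadratic prefactor. A minimal sketch with unit form factors (an illustrative simplification; real calculations use q-dependent atomic form factors and solvent corrections):

```python
import math

def debye_intensity(coords, q):
    """Debye formula I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij),
    here with all form factors f_i = 1. The double loop over atoms
    is the quadratic cost the multipole expansion mitigates."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = math.dist(coords[i], coords[j])
            x = q * r
            total += 1.0 if x == 0.0 else math.sin(x) / x
    return total

# Two unit scatterers 1 A apart: I(q) = 2 + 2*sin(q)/q.
I = debye_intensity([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], 0.5)
```

Evaluating a full profile repeats this sum for each experimental q value, which is why adaptive-order multipole schemes pay off on large models.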

  11. Convergence Analysis of a Class of Computational Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence approaches form a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built for a class of computational intelligence approaches represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Then, two quantification indices, the variation rate and the progress rate, are defined to indicate, respectively, the variety and the optimality of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  12. An Improved Approach for Accurate and Efficient Measurement of Common Carotid Artery Intima-Media Thickness in Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2014-01-01

    Full Text Available The intima-media thickness (IMT) of the common carotid artery (CCA) can serve as an important indicator for the assessment of cardiovascular diseases (CVDs). In this paper an improved approach for automatic IMT measurement with low complexity and high accuracy is presented. 100 ultrasound images from 100 patients were tested with the proposed approach. The ground truth (GT) IMT was manually measured six times and averaged, while the automatically segmented (AS) IMT was computed by the algorithm proposed in this paper. The mean difference ± standard deviation between AS and GT IMT is 0.0231 ± 0.0348 mm, and the correlation coefficient between them is 0.9629. The computational time is 0.3223 s per image with MATLAB under Windows XP on an Intel Core 2 Duo CPU E7500 @2.93 GHz. The proposed algorithm has the potential to achieve real-time measurement under Visual Studio.
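The agreement statistics reported here (mean difference ± standard deviation and the correlation coefficient between automatic and manual IMT) can be reproduced with a few lines; the measurement values below are illustrative, not the study's data:

```python
def agreement(gt, auto):
    """Mean difference, its sample standard deviation, and Pearson
    correlation between ground-truth and automatic measurements."""
    n = len(gt)
    diffs = [a - g for a, g in zip(auto, gt)]
    mean_d = sum(diffs) / n
    std_d = (sum((d - mean_d) ** 2 for d in diffs) / (n - 1)) ** 0.5
    mg, ma = sum(gt) / n, sum(auto) / n
    cov = sum((g - mg) * (a - ma) for g, a in zip(gt, auto))
    var = (sum((g - mg) ** 2 for g in gt) *
           sum((a - ma) ** 2 for a in auto)) ** 0.5
    return mean_d, std_d, cov / var

# Illustrative paired IMT values in mm (hypothetical, four images only).
gt_imt = [0.62, 0.71, 0.55, 0.80]
auto_imt = [0.64, 0.70, 0.57, 0.83]
mean_d, std_d, r = agreement(gt_imt, auto_imt)
```

On the paper's 100 images these three numbers are exactly the 0.0231 mm, 0.0348 mm and 0.9629 quoted above.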

  13. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the FEA input and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. This is, to our knowledge, the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).

  14. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The sco...... and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated....

  15. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach.

    Directory of Open Access Journals (Sweden)

    Zhiheng Wang

    Full Text Available The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at either terminus of an ordered region adjoining a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known-structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as a feature set to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, the MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions a residue is classified as ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established, publicly available disordered region predictors. The results also indicate that our approach is more accurate when a query has higher homology with the knowledge database. DisoMCS is available at http://cal.tongji.edu.cn/disorder/.

  16. Original computer-aided support system for safe and accurate implant placement—Collaboration with a university-originated venture company

    Directory of Open Access Journals (Sweden)

    Taiji Sohmura

    2010-08-01

    Two clinical cases of implant placement on the three lower molars are introduced: a flap operation using a bone-supported surgical guide, and a flapless operation with a tooth-supported surgical guide and immediate loading with provisional prostheses prepared beforehand. The present simulation and drilling support using surgical guides may help to perform safe and accurate implant surgery.

  17. A third order accurate Lagrangian finite element scheme for the computation of generalized molecular stress function fluids

    DEFF Research Database (Denmark)

    Fasano, Andrea; Rasmussen, Henrik K.

    2017-01-01

    A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element me...

  18. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) yields more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  19. Digging deeper on "deep" learning: A computational ecology approach.

    Science.gov (United States)

    Buscema, Massimo; Sacco, Pier Luigi

    2017-01-01

    We propose an alternative approach to "deep" learning that is based on computational ecologies of structurally diverse artificial neural networks, and on dynamic associative memory responses to stimuli. Rather than focusing on massive computation of many different examples of a single situation, we opt for model-based learning and adaptive flexibility. Cross-fertilization of learning processes across multiple domains is the fundamental feature of human intelligence that must inform "new" artificial intelligence.

  20. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  1. Prostate cancer nodal oligometastasis accurately assessed using prostate-specific membrane antigen positron emission tomography-computed tomography and confirmed histologically following robotic-assisted lymph node dissection.

    Science.gov (United States)

    O'Kane, Dermot B; Lawrentschuk, Nathan; Bolton, Damien M

    2016-01-01

    We herein present a case of a 76-year-old gentleman, where prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa), pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed with histological examination of the involved pelvic LNs following pelvic LN dissection.

  2. Prostate cancer nodal oligometastasis accurately assessed using prostate-specific membrane antigen positron emission tomography-computed tomography and confirmed histologically following robotic-assisted lymph node dissection

    Directory of Open Access Journals (Sweden)

    Dermot B O'Kane

    2016-01-01

    Full Text Available We herein present a case of a 76-year-old gentleman in whom prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa) pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed with histological examination of the involved pelvic LNs following pelvic LN dissection.

  3. Computational biomechanics for medicine new approaches and new applications

    CERN Document Server

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  4. DB2: a probabilistic approach for accurate detection of tandem duplication breakpoints using paired-end reads.

    Science.gov (United States)

    Yavaş, Gökhan; Koyutürk, Mehmet; Gould, Meetha P; McMahon, Sarah; LaFramboise, Thomas

    2014-03-05

    With the advent of paired-end high throughput sequencing, it is now possible to identify various types of structural variation on a genome-wide scale. Although many methods have been proposed for structural variation detection, most do not provide precise boundaries for identified variants. In this paper, we propose a new method, Distribution Based detection of Duplication Boundaries (DB2), for accurate detection of tandem duplication breakpoints, an important class of structural variation, with high precision and recall. Our computational experiments on simulated data show that DB2 outperforms state-of-the-art methods in terms of finding breakpoints of tandem duplications, with a higher positive predictive value (precision) in calling the duplications' presence. In particular, DB2's prediction of tandem duplications is correct 99% of the time even for very noisy data, while narrowing down the space of possible breakpoints within a margin of 15 to 20 bps on the average. Most of the existing methods provide boundaries in ranges that extend to hundreds of bases with lower precision values. Our method is also highly robust to varying properties of the sequencing library and to the sizes of the tandem duplications, as shown by its stable precision, recall and mean boundary mismatch performance. We demonstrate our method's efficacy using both simulated paired-end reads, and those generated from a melanoma sample and two ovarian cancer samples. Newly discovered tandem duplications are validated using PCR and Sanger sequencing. Our method, DB2, uses discordantly aligned reads, taking into account the distribution of fragment length to predict tandem duplications along with their breakpoints on a donor genome. The proposed method fine tunes the breakpoint calls by applying a novel probabilistic framework that incorporates the empirical fragment length distribution to score each feasible breakpoint. DB2 is implemented in Java programming language and is freely available
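The scoring idea described above — each feasible breakpoint is scored by how well the fragment lengths it implies for discordant read pairs fit the fragment-length distribution — can be sketched as follows. The library statistics, read positions and candidate breakpoints are all made up for illustration; DB2 itself uses the empirical distribution within a richer probabilistic framework:

```python
import math

# Toy sketch: score candidate tandem-duplication breakpoints by the likelihood
# of the fragment lengths they imply under a (here: Gaussian-approximated)
# fragment-length distribution. All numbers are hypothetical illustrations.

MEAN, SD = 400.0, 40.0  # hypothetical library fragment-length mean and spread

def log_likelihood(length):
    return -0.5 * ((length - MEAN) / SD) ** 2 - math.log(SD * math.sqrt(2 * math.pi))

def score_breakpoint(dup_start, dup_end, pairs):
    """pairs: (left_read_end, right_read_start) positions; a pair spanning the
    tandem-duplication junction implies fragment length
    (dup_end - left) + (right - dup_start)."""
    return sum(log_likelihood((dup_end - left) + (right - dup_start))
               for left, right in pairs)

pairs = [(980, 120), (950, 160), (1010, 90)]
candidates = [(100, 1200), (50, 1300)]
best = max(candidates, key=lambda bp: score_breakpoint(bp[0], bp[1], pairs))
```

The candidate whose implied fragment lengths cluster closest to the library mean wins.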

  5. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    Science.gov (United States)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature and a description provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems.
    Program summary
    Program title: FILMPAR
    Catalogue identifier: AEEL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 530 421
    No. of bytes in distributed program, including test data, etc.: 1 960 313
    Distribution format: tar.gz
    Programming language: C++ and MPI
    Computer: Desktop, server
    Operating system: Unix/Linux, Mac OS X
    Has the code been vectorised or parallelised?: Yes. Tested with up to 128 processors
    RAM: 512 MBytes
    Classification: 12
    External routines: GNU C/C++, MPI
    Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering
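FILMPAR's core is a parallel multigrid solver. While the production code treats the full discretised 3D lubrication equations, the basic two-grid correction cycle that multigrid iterates can be illustrated on a 1D Poisson model problem; this sketch is purely illustrative and has no connection to the actual FILMPAR source:

```python
import numpy as np

# Illustrative two-grid correction cycle (the building block of multigrid)
# for the 1D model problem -u'' = f with homogeneous Dirichlet conditions.

def jacobi(u, f, h, iters=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps."""
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """Smooth, restrict the residual, solve the coarse problem exactly,
    prolong the correction, and smooth again."""
    u = jacobi(u.copy(), f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2      # residual
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    nc = rc.size
    A = (np.diag(np.full(nc - 2, 2.0))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (2 * h) ** 2             # coarse operator
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                         # exact coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)   # linear prolongation
    return jacobi(u + e, f, h)

x = np.linspace(0.0, 1.0, 65)
h = x[1] - x[0]
u = np.zeros_like(x)
for _ in range(20):
    u = two_grid(u, np.ones_like(x), h)   # converges to u(x) = x(1-x)/2
```

In a full multigrid solver the coarse problem is itself solved recursively by the same cycle rather than directly.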

  6. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral LD50 without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance to be placed on their answers as one could attach to more complex outputs. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it is improved through several molecular fragments in the second model. For 57 benzoic compounds for which quantitative structure–toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  7. Computer based approach to fatigue analysis and design

    International Nuclear Information System (INIS)

    Comstock, T.R.; Bernard, T.; Nieb, J.

    1979-01-01

    An approach is presented which uses a mini-computer based system for data acquisition, analysis and graphic display relative to fatigue life estimation and design. Procedures are developed for identifying and eliminating damaging events due to the overall duty cycle, forced vibration and structural dynamic characteristics. Two case histories, weld failures in heavy vehicles and low-cycle fan blade failures, are discussed to illustrate the overall approach. (orig.)

  8. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Directory of Open Access Journals (Sweden)

    Corrado Lodovico Galli

    Full Text Available Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated useful as the first step of chemical safety evaluation since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning the AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy.

  9. A computational approach to evaluate the androgenic affinity of iprodione, procymidone, vinclozolin and their metabolites.

    Science.gov (United States)

    Galli, Corrado Lodovico; Sensi, Cristina; Fumagalli, Amos; Parravicini, Chiara; Marinovich, Marina; Eberini, Ivano

    2014-01-01

    Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated useful as the first step of chemical safety evaluation since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, a different sensitivity concerning AR LBD was computed for the tested species (rat being the least sensitive of the three). This evidence suggests that, in order not to over-/under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to evaluate hazard can accelerate discovery and innovation with a lower economic effort than with a fully wet strategy.

  10. Computer and Internet Addiction: Analysis and Classification of Approaches

    Directory of Open Access Journals (Sweden)

    Zaretskaya O.V.

    2017-08-01

    Full Text Available A theoretical analysis of modern research on the problem of computer and Internet addiction is carried out. The main features of the different approaches are outlined, and an attempt is made to systematize the research conducted and to classify the scientific approaches to the problem of Internet addiction. The author distinguishes nosological, cognitive-behavioural, socio-psychological and dialectical approaches, and justifies the need for an approach that corresponds to the essence, goals and tasks of social psychology when researching the problem of Internet addiction, and dependent behaviour in general. In the author's opinion, the dialectical approach integrates the experience of research conducted within the socio-psychological framework and focuses on the observed inconsistencies in the phenomenon of Internet addiction – the compensatory nature of Internet activity, whereby people who become absorbed in the Internet are in a dysfunctional life situation.

  11. Pedagogical Approaches to Teaching with Computer Simulations in Science Education

    NARCIS (Netherlands)

    Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael

    2013-01-01

    For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is

  12. Cloud Computing - A Unified Approach for Surveillance Issues

    Science.gov (United States)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attraction of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location, through networks. Cloud computing is gradually replacing the traditional Information Technology infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal or sensitive information is being stored in an organization. It is indeed true that today's cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  13. Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach

    Directory of Open Access Journals (Sweden)

    Grover Kearns

    2010-06-01

    Full Text Available Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft or destruction of intellectual property, and fraud. Educating accountants in the use of forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using the students' perceptions of the success of the methodology and of their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.

  14. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as cloud computing, is now transforming drug discovery research. Integration of cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, a Discovery-Cloud would be well placed to manage drug discovery and clinical development data generated using advanced HTS techniques, supporting the vision of personalized medicine.

  15. CAFE: A Computer Tool for Accurate Simulation of the Regulatory Pool Fire Environment for Type B Packages

    International Nuclear Information System (INIS)

    Gritzo, L.A.; Koski, J.A.; Suo-Anttila, A.J.

    1999-01-01

    The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing-fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically, an engulfing-fire boundary condition has been modeled as σT⁴, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. The effects included are the local cooling of gases that form a protective boundary layer, which reduces the incoming radiant heat flux to values lower than expected from a simple σT⁴ model. In addition, the effect of object shape on mixing, which may increase the local fire temperature, is included. Both high and low temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios.
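The simple historical boundary condition mentioned above is just the Stefan-Boltzmann law. As a minimal sketch (the 800 °C fire temperature is a nominal regulatory pool-fire value, and unit emissivity is an assumption of the idealized model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_celsius):
    """Radiant flux sigma*T^4 from an idealized blackbody fire (emissivity 1)."""
    t_kelvin = t_celsius + 273.15
    return SIGMA * t_kelvin ** 4

q = blackbody_flux(800.0)  # roughly 75 kW/m^2 for a nominal 800 C engulfing fire
```

CAFE's point is precisely that the true local flux departs from this value in both directions, depending on boundary-layer cooling and mixing.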

  16. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-01-01

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.

  18. A computational approach to chemical etiologies of diabetes

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Brunak, Søren; Grandjean, Philippe

    2013-01-01

    Computational meta-analysis can link environmental chemicals to genes and proteins involved in human diseases, thereby elucidating possible etiologies and pathogeneses of non-communicable diseases. We used an integrated computational systems biology approach to examine possible pathogenetic linkages in type 2 diabetes (T2D) through genome-wide associations, disease similarities, and published empirical evidence. Ten environmental chemicals were found to be potentially linked to T2D; the highest scores were observed for arsenic, 2,3,7,8-tetrachlorodibenzo-p-dioxin, hexachlorobenzene...

  19. HPC Institutional Computing Project: W15_lesreactiveflow KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-05

    KIVA-hpFE is a high performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version, and faster than our previous generation of parallel engine modeling software by many factors. The 5th generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, this LES method does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, porous media

  20. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    Science.gov (United States)

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

    We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, (b) an approximation to the potential surface, (c) its gradients, and (d) a Shannon-information-theory-based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to (a) compute potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene) where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen
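The information-theoretic ingredient in criterion (d) amounts to a Shannon entropy of the discretized wavepacket density. A minimal sketch (the Gaussian densities and grid are illustrative, not the authors' actual sampling function):

```python
import numpy as np

# Shannon entropy of a normalized probability density discretized on a
# uniform grid. The Gaussian wavepackets below are illustrative only.

def shannon_entropy(density, dx):
    """-sum p log p over grid cells of width dx (0 log 0 taken as 0)."""
    p = density * dx          # probability mass per cell
    p = p[p > 0]
    return -np.sum(p * np.log(p))

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
entropies = [
    shannon_entropy(np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi)), dx)
    for s in (0.5, 1.0, 2.0)
]
# a broader (more delocalized) wavepacket carries higher entropy, which can
# flag regions of the surface where the wavepacket is spread out
```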

  1. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination

    Directory of Open Access Journals (Sweden)

    Jiun-Hung Geng

    2015-01-01

    Full Text Available Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72–24.20), 2.25 (1.22–4.14), 2.20 (1.10–4.40), and 2.89 (1.35–6.21), respectively; all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these

  2. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    Science.gov (United States)

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
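    The adjusted odds ratios and confidence intervals quoted above follow directly from the fitted logistic-regression coefficients: OR = exp(β) and 95% CI = exp(β ± 1.96·SE). A minimal sketch of that arithmetic (the coefficient and standard error below are hypothetical, chosen only to illustrate the conversion, not taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Adjusted odds ratio and 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for one binary predictor (e.g. stone size above a cutoff)
or_, lo, hi = odds_ratio_ci(beta=2.25, se=0.477)
```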

  3. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should (1) be repeatable; (2) be demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which they are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed
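    The correlation step above can be made concrete: given calculated and benchmark k_eff values, the bias and its spread quantify the "defined confidence" the policy requires. A minimal sketch (all numbers invented; a real validation would use formal tolerance-limit statistics rather than the simple two-sigma margin shown):

```python
import statistics

def keff_validation(calc, bench):
    """Mean bias and sample spread of (calculated - benchmark) k_eff."""
    diffs = [c - b for c, b in zip(calc, bench)]
    return statistics.mean(diffs), statistics.stdev(diffs)

calc = [0.998, 1.002, 0.995, 1.001, 0.997]   # hypothetical code results
bench = [1.000] * 5                          # critical benchmark experiments
bias, sigma = keff_validation(calc, bench)

# Illustrative upper subcritical limit from the bias and a two-sigma margin
usl = 1.0 + bias - 2.0 * sigma
```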

  4. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    Science.gov (United States)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  5. A SURVEY ON DOCUMENT CLUSTERING APPROACH FOR COMPUTER FORENSIC ANALYSIS

    OpenAIRE

    Monika Raghuvanshi*, Rahul Patel

    2016-01-01

    In a forensic analysis, large numbers of files are examined. Much of the information is in unstructured format, so analyzing it is a difficult task for the computer forensic examiner. Performing forensic analysis of documents within a limited period of time therefore requires a special approach such as document clustering. This paper reviews different document clustering algorithms and methodologies, for example K-means, K-medoid, single link, complete link, and average link, in accordance...
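    As a concrete illustration of the simplest of the algorithms surveyed, toy documents can be vectorized with TF-IDF and grouped by cosine similarity to fixed seed documents, which is essentially one assignment pass of K-medoid (the corpus and medoid choice below are invented for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors for a toy corpus (no stemming or stop-word removal)."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n, vocab = len(docs), sorted(df)
    return [[Counter(doc)[t] / len(doc) * math.log(n / df[t]) for t in vocab]
            for doc in tokenized]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den else 0.0

docs = ["invoice payment bank transfer", "bank account payment invoice",
        "holiday photos beach trip", "beach trip photos album"]
vecs = tfidf(docs)

# One K-medoid-style assignment pass with docs 0 and 2 as fixed medoids
medoids = [0, 2]
clusters = [max(range(len(medoids)), key=lambda m: cosine(v, vecs[medoids[m]]))
            for v in vecs]
```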

  6. How accurate are measurements of skin-lesion depths on prebiopsy supine chest computed tomography for transthoracic needle biopsies?

    International Nuclear Information System (INIS)

    Cheung, Joo Yeon; Kim, Yookyung; Shim, Sung Shine; Lee, Jin Hwa; Chang, Jung Hyun; Ryu, Yon Ju; Lee, Rena J.

    2012-01-01

    Aim: To evaluate the accuracy of depth measurements on supine chest computed tomography (CT) for transthoracic needle biopsy (TNB). Materials and methods: We measured skin-lesion depths from the skin surface to nodules on both prebiopsy supine CT scans and CT scans obtained during cone beam CT-guided TNB in the supine (n = 29) or prone (n = 40) position in 69 patients, and analyzed the differences between the two measurements, based on patient position for the biopsy and lesion location. Results: Skin-lesion depths measured on prebiopsy supine CT scans were significantly larger than those measured on CT scans obtained during TNB in the prone position (p < 0.001; mean difference ± standard deviation (SD), 6.2 ± 5.7 mm; range, 0–18 mm), but the differences showed marginal significance in the supine position (p = 0.051; 3.5 ± 3.9 mm; 0–13 mm). Additionally, the differences were significantly larger for the upper (mean ± SD, 7.8 ± 5.7 mm) and middle (10.1 ± 6.5 mm) lung zones than for the lower lung zones (3.1 ± 3.3 mm) in the prone position (p = 0.011), and were larger for the upper lung zone (4.6 ± 5.0 mm) than for the middle (2.4 ± 2.0 mm) and lower (2.3 ± 2.3 mm) lung zones in the supine position (p = 0.004). Conclusions: Skin-lesion depths measured on prebiopsy supine chest CT scans were inaccurate for TNB in the prone position, particularly for nodules in the upper and middle lung zones.
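    The paired comparison reported above (a mean difference ± SD between two depth measurements on the same patients) reduces to a paired t statistic. A sketch with invented depths, not the study's data:

```python
import math
import statistics

# Hypothetical paired skin-lesion depths (mm): prebiopsy supine CT vs during-biopsy CT
pre = [62.0, 55.5, 48.0, 70.2, 66.1, 58.4]
intra = [55.1, 50.0, 43.2, 63.0, 61.5, 52.8]

diffs = [a - b for a, b in zip(pre, intra)]
mean_d = statistics.mean(diffs)                    # mean difference
sd_d = statistics.stdev(diffs)                     # SD of the differences
t_stat = mean_d / (sd_d / math.sqrt(len(diffs)))   # paired t statistic
```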

  7. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
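    A bare-bones version of this inference loop can be sketched with a plain random-walk Metropolis sampler standing in for DRAM and a linear toy forward model standing in for the finite element surrogate (all names and numbers below are hypothetical):

```python
import math
import random

random.seed(0)

def strain_model(size):
    """Toy forward model: measured strain grows linearly with a damage-size parameter."""
    return 5.0 * size

# Synthetic noisy observations for a true damage size of 2.0
true_size, noise_sd = 2.0, 0.5
data = [strain_model(true_size) + random.gauss(0, noise_sd) for _ in range(50)]

def log_post(size):
    """Log posterior: Gaussian likelihood with a uniform prior on (0, 10)."""
    if not 0.0 < size < 10.0:
        return -math.inf
    return -sum((d - strain_model(size)) ** 2 for d in data) / (2 * noise_sd ** 2)

# Random-walk Metropolis (plain; DRAM adds delayed rejection and adaptation)
samples, x = [], 1.0
for _ in range(5000):
    prop = x + random.gauss(0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

est = sum(samples[1000:]) / len(samples[1000:])   # posterior mean after burn-in
```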

  8. Computational Approaches for Integrative Analysis of the Metabolome and Microbiome

    Directory of Open Access Journals (Sweden)

    Jasmine Chong

    2017-11-01

    Full Text Available The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.

  9. A Computer Vision Approach to Identify Einstein Rings and Arcs

    Science.gov (United States)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspection of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
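    The circle Hough transform is easy to sketch: every edge pixel votes for all centre-radius combinations consistent with it, and true circles emerge as peaks in the accumulator. A pure-Python toy on a synthetic 40×40 edge image (a real pipeline would use an optimized library implementation):

```python
import math
from collections import Counter

# Synthetic edge image: one circle of radius 10 centred at (20, 20)
edges = {(20 + round(10 * math.cos(2 * math.pi * k / 100)),
          20 + round(10 * math.sin(2 * math.pi * k / 100))) for k in range(100)}

# Accumulator: each edge pixel votes for every (cx, cy, r) it could lie on
acc = Counter()
for x, y in edges:
    for cx in range(40):
        for cy in range(40):
            r = round(math.hypot(x - cx, y - cy))
            if 5 <= r <= 15:
                acc[(cx, cy, r)] += 1

(cx, cy, r), votes = acc.most_common(1)[0]   # accumulator peak = detected circle
```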

  10. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf

    1996-01-01

    Full Text Available Research in scientific programming enables us to realize more and more complex applications while, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (an iterative method) and a frontal solver (a direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed by the Lagrange formalism using Maple.
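    The iterative solver named above can be sketched in a few lines. This is the textbook conjugate gradient method on a small dense toy system, not the MUSIC parallel implementation:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Iterative CG solve of A x = b for a symmetric positive-definite matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x, with x = 0 initially
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution: [1/11, 7/11]
```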

  11. Computer-oriented approach to fault-tree construction

    International Nuclear Information System (INIS)

    Salem, S.L.; Apostolakis, G.E.; Okrent, D.

    1976-11-01

    A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the Computer Automated Tree (CAT) program, to several systems. A means of representing component behavior by decision tables is presented. The method developed allows the modeling of components with various combinations of electrical, fluid and mechanical inputs and outputs. Each component can have multiple internal failure mechanisms which combine with the states of the inputs to produce the appropriate output states. The generality of this approach allows not only the modeling of hardware, but human actions and interactions as well. A procedure for constructing and editing fault trees, either manually or by computer, is described. The techniques employed result in a complete fault tree, in standard form, suitable for analysis by current computer codes. Methods of describing the system, defining boundary conditions and specifying complex TOP events are developed in order to set up the initial configuration for which the fault tree is to be constructed. The approach used allows rapid modifications of the decision tables and systems to facilitate the analysis and comparison of various refinements and changes in the system configuration and component modeling
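    The decision-table idea can be sketched as a mapping from (input state, internal failure mode) to output state; working backward from an undesired output is then the seed of fault-tree construction. The valve and its states below are hypothetical illustrations, not taken from the CAT program:

```python
# Decision table for a hypothetical valve component:
# (input_flow, internal_mode) -> output_flow
VALVE = {
    ("flow", "ok"): "flow",
    ("flow", "stuck_closed"): "no_flow",
    ("flow", "leaks"): "reduced_flow",
    ("no_flow", "ok"): "no_flow",
    ("no_flow", "stuck_open"): "no_flow",
}

def output_state(input_flow, mode):
    """Resolve a component's output from its decision table."""
    return VALVE[(input_flow, mode)]

# Which (input, failure-mode) combinations explain the TOP event "no_flow"?
causes = sorted(k for k, v in VALVE.items() if v == "no_flow")
```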

  12. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    Directory of Open Access Journals (Sweden)

    Perrin H. Beatty

    2016-10-01

    Full Text Available A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.

  13. A comparative approach to closed-loop computation.

    Science.gov (United States)

    Roth, E; Sponberg, S; Cowan, N J

    2014-04-01

    Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology, that is, how information can flow between subsystems, provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Computational approaches in the design of synthetic receptors - A review.

    Science.gov (United States)

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high affinity robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of monomer(s)-template ratio and selectivity analysis. In this review the various computational methods will be discussed with reference to all the published relevant literature since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which they were used. Detailed analysis is given to novel techniques including analysis of polymer binding sites, the use of novel screening programs and simulations of MIP polymerisation reactions. The further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Analytical and computational approaches to define the Aspergillus niger secretome

    Energy Technology Data Exchange (ETDEWEB)

    Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  16. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    Directory of Open Access Journals (Sweden)

    Dongsheng Che

    2014-01-01

    Full Text Available High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria, and these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genomes, and containing mobility genes so that they can be integrated into the host genome. In this review, we will discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources will also be discussed, so that researchers may find them useful for the studies of bacterial evolution and pathogenicity mechanisms.
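    One of the "detectable properties" mentioned, an anomalous genomic signature, can be sketched with the simplest signature of all, GC content (toy sequences below; real PAI detection combines k-mer signatures, codon usage, and mobility-gene evidence):

```python
def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return sum(seq.count(b) for b in "GC") / len(seq)

def flag_islands(genome, window=10, threshold=0.15):
    """Flag windows whose GC content deviates strongly from the genome-wide mean."""
    mean_gc = gc_content(genome)
    return [start for start in range(0, len(genome) - window + 1, window)
            if abs(gc_content(genome[start:start + window]) - mean_gc) > threshold]

host = "ATGCATGCAT" * 4            # host backbone, GC ~ 0.4
island = "GGGGGCCCCC"              # GC-rich insert, e.g. horizontally acquired
genome = host[:20] + island + host[20:]

flagged = flag_islands(genome)     # start positions of anomalous windows
```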

  17. Fast reactor safety and computational thermo-fluid dynamics approaches

    International Nuclear Information System (INIS)

    Ninokata, Hisashi; Shimizu, Takeshi

    1993-01-01

    This article provides a brief description of the safety principle on which liquid metal cooled fast breeder reactors (LMFBRs) are based and the roles of computations in the safety practices. A number of thermohydraulics models have been developed to date that successfully describe several of the important types of fluids and materials motion encountered in the analysis of postulated accidents in LMFBRs. Most of these models use a mixture of implicit and explicit numerical solution techniques in solving a set of conservation equations formulated in Eulerian coordinates, with special techniques included for specific situations. Typical computational thermo-fluid dynamics approaches are discussed, in particular for analyses of the physical phenomena relevant to fuel subassembly thermohydraulics design and for describing the motion of molten materials in the core over a large scale. (orig.)

  18. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
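    At the "most fundamental level" mentioned above, a spreadsheet-style ingestion dose is simply the product of concentration, intake rate, and a dose coefficient. A sketch with a hypothetical dose coefficient (the value below is invented for illustration and is not taken from any of the codes named):

```python
# Spreadsheet-level ingestion dose:
#   dose (Sv/yr) = concentration (Bq/L) * intake (L/yr) * dose coefficient (Sv/Bq)
def ingestion_dose(conc_bq_per_l, intake_l_per_yr, dcf_sv_per_bq):
    return conc_bq_per_l * intake_l_per_yr * dcf_sv_per_bq

# Unit concentration (1 Bq/L), 730 L/yr drinking-water intake,
# and a hypothetical dose coefficient of 2.8e-8 Sv/Bq
dose_sv = ingestion_dose(1.0, 730.0, 2.8e-8)
```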

  19. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
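    For reference, the Gaussian filter at issue is conventionally written as follows (standard LES form with filter width Δ; the paper's exact normalization may differ):

```latex
\bar{u}(x) = \int_{-\infty}^{\infty} G(x - x')\, u(x')\, dx',
\qquad
G(x) = \left(\frac{6}{\pi\Delta^{2}}\right)^{1/2} \exp\left(-\frac{6x^{2}}{\Delta^{2}}\right)
```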

  20. Approaching multiphase flows from the perspective of computational fluid dynamics

    International Nuclear Information System (INIS)

    Banas, A.O.

    1992-01-01

    Thermalhydraulic simulation methodologies based on subchannel and porous-medium concepts are briefly reviewed and contrasted with the general approach of Computational Fluid Dynamics (CFD). An outline of the advanced CFD methods for single-phase turbulent flows is followed by a short discussion of the unified formulation of averaged equations for turbulent and multiphase flows. Some of the recent applications of CFD at Chalk River Laboratories are discussed, and the complementary role of CFD with regard to the established thermalhydraulic methods of analysis is indicated. (author). 8 refs

  1. A New Approach for Accurate Prediction of Liquid Loading of Directional Gas Wells in Transition Flow or Turbulent Flow

    Directory of Open Access Journals (Sweden)

    Ruiqing Ming

    2017-01-01

    Full Text Available Current common models for calculating the continuous liquid-carrying critical gas velocity are established for vertical wells and laminar flow, without considering the influence of deviation angle and Reynolds number on liquid-carrying. As more directional wells operate in transition flow or turbulent flow, the current common models cannot accurately predict the critical gas velocity of these wells. We therefore built a new model to predict the continuous liquid-carrying critical gas velocity for directional wells in transition flow or turbulent flow. Sensitivity analysis shows that the correction coefficient is mainly influenced by Reynolds number and deviation angle. With the increase of Reynolds number, the critical liquid-carrying gas velocity first increases and then decreases, and with the increase of deviation angle, the critical liquid-carrying gas velocity gradually decreases. Case calculations indicate that the error of this new model is less than 10%, making it much more accurate than current common models. The continuous liquid-carrying critical gas velocity of directional wells in transition flow or turbulent flow can therefore be predicted accurately using this new model.
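    The article's correction-coefficient model is not reproduced here, but the vertical-well baseline it corrects is the classical droplet-model (Turner-type) critical velocity, obtained by balancing drag against gravity on the largest Weber-stable droplet:

```python
def turner_critical_velocity(sigma, rho_l, rho_g, g=9.81, we_c=30.0, cd=0.44):
    """Droplet-model critical gas velocity (m/s) for a vertical well.

    Drag balance on a sphere gives v^2 = 4*g*d*(rho_l - rho_g)/(3*cd*rho_g);
    the largest stable droplet has d = we_c*sigma/(rho_g*v^2), so
    v = (4*we_c/(3*cd))**0.25 * (g*sigma*(rho_l - rho_g)/rho_g**2)**0.25.
    """
    return ((4.0 * we_c / (3.0 * cd)) ** 0.25
            * (g * sigma * (rho_l - rho_g) / rho_g ** 2) ** 0.25)

# Illustrative values: sigma in N/m, densities in kg/m^3
v = turner_critical_velocity(sigma=0.06, rho_l=1074.0, rho_g=50.0)
```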

  2. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    International Nuclear Information System (INIS)

    Khan, Junaid Ali; Raja, Muhammad Asif Zahoor; Qureshi, Ijaz Mansoor

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with genetic algorithm and local search by pattern search technique. The applicability of this approach ranges from single order NLODEs, to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval unlike the other numerical techniques with comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in the execution speed. (general)
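    The trial-solution idea can be sketched for the single ODE y' = -y, y(0) = 1: a one-neuron network multiplied by t enforces the initial condition by construction, and a crude accept-if-better random search (standing in for the paper's GA plus pattern-search hybrid) minimizes the collocation residual:

```python
import math
import random

random.seed(1)

# Trial solution y(t) = 1 + t * w * tanh(a*t + b) satisfies y(0) = 1 exactly
ts = [i / 20 for i in range(21)]          # collocation points on [0, 1]

def y_trial(t, p):
    w, a, b = p
    return 1.0 + t * w * math.tanh(a * t + b)

def residual(p):
    """Sum of squared ODE residuals (y' + y)^2 at the collocation points."""
    h = 1e-5
    err = 0.0
    for t in ts:
        yp = (y_trial(t + h, p) - y_trial(t - h, p)) / (2 * h)
        err += (yp + y_trial(t, p)) ** 2
    return err

# Crude random search in place of the GA + pattern-search hybrid
best = [random.uniform(-1, 1) for _ in range(3)]
best_err = residual(best)
for _ in range(5000):
    cand = [x + random.gauss(0, 0.1) for x in best]
    err = residual(cand)
    if err < best_err:
        best, best_err = cand, err

y_at_1 = y_trial(1.0, best)   # compare with the exact value exp(-1)
```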

  3. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Directory of Open Access Journals (Sweden)

    Dilip Swaminathan

    2009-01-01

    kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.

  4. Fast and accurate approaches for large-scale, automated mapping of food diaries on food composition tables

    DEFF Research Database (Denmark)

    Lamarine, Marc; Hager, Jörg; Saris, Wim H M

    2018-01-01

    the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English...... not lead to any improvements compared to the fuzzy matching. However, it could increase substantially the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages...... and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We demonstrate that both high precision and recall can be achieved. Our solutions can be used with any FCT and do not require any programming background...

  5. The advantage of three-dimensional computed tomography (3D-CT) for ensuring accurate bone incision in sagittal split ramus osteotomy

    Directory of Open Access Journals (Sweden)

    Coen Pramono D

    2005-03-01

    Full Text Available Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including a choice of surgical technique that accounts for the differences in anatomical structures amongst individuals. Programs that simulate the surgery are becoming increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, or a more sophisticated method such as three-dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramina were located relatively high with respect to the sigmoid notches; therefore, ensuring accurate bone incisions in the sagittal split was presumed to be difficult. A 3D-CT was made and was considered very helpful in supporting the pre-operative diagnosis.

  6. Experiment and theory at the convergence limit: accurate equilibrium structure of picolinic acid by gas-phase electron diffraction and coupled-cluster computations.

    Science.gov (United States)

    Vogt, Natalja; Marochkin, Ilya I; Rykov, Anatolii N

    2018-04-18

    The accurate molecular structure of picolinic acid has been determined from experimental data and computed at the coupled-cluster level of theory. Only one conformer, with the O=C-C-N and H-O-C=O fragments in antiperiplanar (ap) positions, ap-ap, has been detected under the conditions of the gas-phase electron diffraction (GED) experiment (T_nozzle = 375(3) K). The semiexperimental equilibrium structure, r_e^SE, of this conformer has been derived from the GED data taking into account the anharmonic vibrational effects estimated from the ab initio force field. The equilibrium structures of the two lowest-energy conformers, ap-ap and ap-sp (with the synperiplanar H-O-C=O fragment), have been fully optimized at the CCSD(T)_ae level of theory in conjunction with the triple-ζ basis set (cc-pwCVTZ). The quality of the optimized structures has been improved by extrapolation to the quadruple-ζ basis set. The high accuracy of both the GED determination and the CCSD(T) computations has been demonstrated by a correct comparison of structures having the same physical meaning. The ap-ap conformer has been found to be stabilized by the relatively strong NH-O hydrogen bond of 1.973(27) Å (GED) and predicted to be lower in energy by 16 kJ/mol with respect to the ap-sp conformer, which lacks a hydrogen bond. The influence of this bond on the structure of picolinic acid has been analyzed within the Natural Bond Orbital model. The possibility of the decarboxylation of picolinic acid has been considered in the GED analysis, but no significant amounts of pyridine and carbon dioxide could be detected. To reveal the structural changes reflecting the mesomeric and inductive effects of the carboxylic substituent, the accurate structure of pyridine has also been computed at the CCSD(T)_ae level with basis sets of triple- to 5-ζ quality. The comprehensive structure computations for pyridine as well as for

  7. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    Science.gov (United States)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
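
The incentive structure described in this record can be illustrated with a toy expected-payoff calculation (this is not the paper's Markov-chain analysis; the reward, penalty, and cost values below are invented for illustration): an honest worker always pays a computing cost, while a cheating worker risks a penalty only when the master audits the answer.

```python
def expected_payoffs(audit_p, reward=1.0, penalty=2.0, cost=0.2):
    """Expected payoff of an honest vs. a cheating worker when the master
    audits each answer with probability audit_p (toy model)."""
    honest = reward - cost                                # always paid, always pays the cost
    cheat = (1 - audit_p) * reward - audit_p * penalty    # paid unless caught, punished if caught
    return honest, cheat

def critical_audit_rate(reward=1.0, penalty=2.0, cost=0.2):
    """Audit probability above which honesty has the higher expected payoff:
    reward - cost > (1 - p) * reward - p * penalty  <=>  p > cost / (reward + penalty)."""
    return cost / (reward + penalty)
```

With these illustrative numbers the threshold is cost / (reward + penalty), roughly 0.067, so even a low audit rate makes honesty the better strategy, echoing the record's observation that the system works under large tolerances.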

  8. Novel computational approaches for the analysis of cosmic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)

    2016-07-01

    In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing a software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.

  9. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
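
As a minimal illustration of the risk measure involved (not the paper's parametric optimization itself; the sample costs are invented), CVaR at level alpha is simply the average of the worst (1 - alpha) fraction of simulated execution costs:

```python
def cvar(costs, alpha=0.95):
    """Conditional Value-at-Risk: the mean of the worst (1 - alpha) tail of
    simulated execution costs (higher cost = worse outcome)."""
    k = max(1, int(round(len(costs) * (1 - alpha))))
    tail = sorted(costs, reverse=True)[:k]
    return sum(tail) / len(tail)

costs = list(range(1, 101))            # stand-in for Monte Carlo execution costs
tail_mean = cvar(costs, alpha=0.95)    # mean of the 5 largest costs -> 98.0
```

In a parametric approach, such a CVaR estimate (or a penalized version of it) would be evaluated over simulated price paths for each candidate coefficient vector of the strategy.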

  10. Fast and Accurate Approaches for Large-Scale, Automated Mapping of Food Diaries on Food Composition Tables

    Directory of Open Access Journals (Sweden)

    Marc Lamarine

    2018-05-01

    Full Text Available Aim of Study: The use of weighed food diaries in nutritional studies provides a powerful method to quantify food and nutrient intakes. Yet, mapping these records onto food composition tables (FCTs) is a challenging, time-consuming and error-prone process. Experts make this effort manually and no automation has been previously proposed. Our study aimed to assess automated approaches to map food items onto FCTs. Methods: We used food diaries (~170,000 records pertaining to 4,200 unique food items) from the DiOGenes randomized clinical trial. We attempted to map these items onto six FCTs available from the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English translation. Top matching pairs were reviewed manually to derive performance metrics: precision (the percentage of correctly mapped items) and recall (the percentage of mapped items). Results: The simpler approach, fuzzy matching, provided very good performance. Under a relaxed threshold (score > 50%), this approach mapped 99.49% of the items with a precision of 88.75%. With a slightly more stringent threshold (score > 63%), the precision could be significantly improved to 96.81% while keeping a recall rate > 95% (i.e., only 5% of the queried items would not be mapped). The machine learning approach did not lead to any improvements compared to the fuzzy matching. However, it could substantially increase the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We
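
The fuzzy-matching step can be sketched with Python's standard library; the food names, the percent scoring, and the 63% cut-off below merely mimic the description in the abstract, and this is not the authors' released R packages:

```python
from difflib import SequenceMatcher

def fuzzy_score(query, candidate):
    """Name-similarity score in percent (case-insensitive)."""
    return 100.0 * SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

def map_item(query, fct_names, threshold=63.0):
    """Best-matching FCT entry, or None when the score is below the threshold."""
    best = max(fct_names, key=lambda name: fuzzy_score(query, name))
    score = fuzzy_score(query, best)
    return (best if score > threshold else None), score

# Hypothetical FCT entries and diary item:
fct = ["Bread, whole wheat", "Milk, semi-skimmed", "Apple, raw"]
match, score = map_item("apple raw", fct)
```

A real pipeline would run this over every diary item, keep the top-scoring pair per item, and send low-confidence pairs for manual review, which is where the precision/recall trade-off of the threshold comes in.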

  11. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modeling are provided by measurements. • Tars in the primary combustion products were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good agreement between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for optimization and is part of the design process of biomass combustion systems. So far, an accurate simulation of biomass combustion has only been possible with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the high amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was performed by the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort, because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and a very good agreement was found. With the presented approach accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies, e.g. for combustion geometry improvement, can be realized effectively due to the short calculation

  12. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
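
The conjugate-gradient building block at the heart of such a reconstruction can be sketched for the quadratic case. The paper's cost function is non-quadratic and uses a monotone CG variant with suboptimal descent steps; the plain CG below, applied to a small symmetric positive-definite system, is only illustrative:

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Minimal CG solver for A x = b with A symmetric positive-definite
    (A as a list of lists, b as a list)."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, w: sum(ui * wi for ui, wi in zip(u, w))
    r = [b[i] - y for i, y in enumerate(matvec(A, x))]   # initial residual
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # solves A x = b
```

In the MAP setting, each CG iteration would instead use the gradient of the non-quadratic posterior cost, with the step size chosen to keep the cost monotonically decreasing.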

  13. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues are examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo-matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and in the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.

  14. A HOLISTIC APPROACH FOR INSPECTION OF CIVIL INFRASTRUCTURES BASED ON COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    C. Stentoumis

    2016-06-01

    Full Text Available In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues are examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo-matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and in the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.

  15. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images.

    Science.gov (United States)

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Bodenstedt, Sebastian; Sanchez, Alexandro; Stock, Christian; Kenngott, Hannes Gotz; Eisenmann, Mathias; Speidel, Stefanie

    2014-01-01

    Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.

  16. A Novel Edge-Map Creation Approach for Highly Accurate Pupil Localization in Unconstrained Infrared Iris Images

    Directory of Open Access Journals (Sweden)

    Vineet Kumar

    2016-01-01

    Full Text Available Iris segmentation in iris recognition systems is a challenging task in noncooperative environments. Iris segmentation is the process of detecting the pupil, the iris's outer boundary, and the eyelids in the iris image. In this paper, we propose a pupil localization method for locating the pupils in non-close-up and frontal-view iris images that are captured under near-infrared (NIR) illumination and contain noise such as specular and lighting reflection spots, eyeglasses, nonuniform illumination, low contrast, and occlusions by the eyelids, eyelashes, and eyebrow hair. In the proposed method, first, a novel edge-map is created from the iris image by combining conventional thresholding and edge-detection-based segmentation techniques, and then the general circular Hough transform (CHT) is used to find the pupil circle parameters in the edge-map. Our main contribution in this research is a novel edge-map creation technique, which drastically reduces the false edges in the edge-map of the iris image and makes pupil localization in noisy NIR images more accurate, fast, robust, and simple. The proposed method was tested with three iris databases: CASIA-Iris-Thousand (version 4.0), CASIA-Iris-Lamp (version 3.0), and MMU (version 2.0). The average accuracy of the proposed method is 99.72% and the average time cost per image is 0.727 s.
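
The second stage of such a pipeline, the circular Hough transform, can be sketched as a voting procedure. This is a deliberately tiny, stdlib-only toy on synthetic edge points, not the paper's implementation:

```python
import math
from collections import Counter

def circular_hough(edge_points, radii, theta_step=5):
    """Each edge point votes for every centre (a, b) that would place it on a
    circle of each candidate radius; the most-voted (a, b, r) wins."""
    votes = Counter()
    for x, y in edge_points:
        for r in radii:
            for theta in range(0, 360, theta_step):
                a = round(x - r * math.cos(math.radians(theta)))
                b = round(y - r * math.sin(math.radians(theta)))
                votes[(a, b, r)] += 1
    return votes.most_common(1)[0][0]

# Synthetic "pupil" edge: points on a circle of radius 5 centred at (20, 20).
edge = [(20 + round(5 * math.cos(math.radians(t))),
         20 + round(5 * math.sin(math.radians(t)))) for t in range(0, 360, 15)]
best = circular_hough(edge, radii=[4, 5, 6])
```

The value of the edge-map stage in the paper is precisely to feed this accumulator as few false edge points as possible, since every spurious point casts votes for many spurious circles.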

  17. A new approach for accurate mass assignment on a multi-turn time-of-flight mass spectrometer.

    Science.gov (United States)

    Hondo, Toshinobu; Jensen, Kirk R; Aoki, Jun; Toyoda, Michisato

    2017-12-01

    A simple, effective accurate mass assignment procedure for a time-of-flight mass spectrometer is desirable. External mass calibration using a mass calibration standard together with an internal mass reference (lock mass) is a common technique for mass assignment; however, using polynomial fitting can result in mass-dependent errors. By using the multi-turn time-of-flight mass spectrometer infiTOF-UHV, we were able to obtain multiple time-of-flight data from an ion monitored under several different numbers of laps, which were then used to calculate a mass calibration equation. We have developed a data acquisition system that simultaneously monitors spectra at several different lap conditions with on-the-fly centroid determination and scan law estimation, which is a function of acceleration voltage, flight path, and instrumental time delay. Mass errors of less than 0.9 mDa were observed for assigned mass-to-charge ratios (m/z) ranging between 4 and 134 using only 40Ar+ as a reference. It was also observed that estimating the scan law on-the-fly provides excellent mass drift compensation.
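
The idea of monitoring one ion at several lap counts can be illustrated with a toy scan law (all constants and flight times below are invented; the real instrument's scan law also involves acceleration voltage and path length): flight time grows linearly with the number of laps, with a slope proportional to sqrt(m/z), so a least-squares line fit separates the mass-dependent term from the instrumental time delay.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a * x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy scan law: t(N) = t0 + c * sqrt(m/z) * N, with c calibrated on a reference ion.
c, t0, mz_true = 1.7, 5.0, 40.0              # hypothetical constants
laps = [10, 20, 30, 40]
times = [t0 + c * math.sqrt(mz_true) * n for n in laps]

slope, delay = fit_line(laps, times)
mz = (slope / c) ** 2                        # recover m/z from the slope
```

Because the delay t0 drops out of the slope, the same fit performed continuously on a reference ion is what allows the drift compensation described in the record.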

  18. Suggested Approaches to the Measurement of Computer Anxiety.

    Science.gov (United States)

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  19. Computational Diagnostic: A Novel Approach to View Medical Data.

    Energy Technology Data Exchange (ETDEWEB)

    Mane, K. K. (Ketan Kirtiraj); Börner, K. (Katy)

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight into a patient's medical condition. The paper details different interactive features of the tool which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags behind in reaping the immediate benefits of the new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, we notice a drive that promotes a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier EHR attempts replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D generated images from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', that provides a novel computational approach to look at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with data is supported to help doctors practice evidence-based decisions and make judicious

  20. Solvent effect on indocyanine dyes: A computational approach

    International Nuclear Information System (INIS)

    Bertolino, Chiara A.; Ferrari, Anna M.; Barolo, Claudia; Viscardi, Guido; Caputo, Giuseppe; Coluccia, Salvatore

    2006-01-01

    The solvatochromic behaviour of a series of indocyanine dyes (Dyes I-VIII) was investigated by quantum chemical calculations. The effect of the polymethine chain length and of the indolenine structure has been satisfactorily reproduced by semiempirical Pariser-Parr-Pople (PPP) calculations. The solvatochromism of 3,3,3',3'-tetramethyl-N,N'-diethylindocarbocyanine iodide (Dye I) has been investigated in depth within the ab initio time-dependent density functional theory (TD-DFT) approach. Dye I undergoes non-polar solvation, and a linear correlation has been identified between absorption shifts and refractive index. Computed absorption λmax and oscillator strengths obtained by TD-DFT are in good agreement with the experimental data

  1. Systems approaches to computational modeling of the oral microbiome

    Directory of Open Access Journals (Sweden)

    Dimiter V. Dimitrov

    2013-07-01

    Full Text Available Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system-level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain on diet-oral microbiome-host mucosal transcriptome interactions. In particular we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications ranging from basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, to human disease, where we can validate new mechanisms and biomarkers for the prevention and treatment of chronic disorders.

  2. A computational approach to mechanistic and predictive toxicology of pesticides

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Vinggaard, Anne Marie; Hadrup, Niels

    2014-01-01

    Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes...... of action in humans, to group them according to their modes of action, and to hypothesize on their potential effects on human health. We extracted human proteins associated to prochloraz, tebuconazole, epoxiconazole, procymidone, and mancozeb and enriched each protein set by using a high confidence human......, and procymidone exerted their effects mainly via interference with steroidogenesis and nuclear receptors. Prochloraz was associated to a large number of human diseases, and together with tebuconazole showed several significant associations to Testicular Dysgenesis Syndrome. Mancozeb showed a differential mode...

  3. Vehicular traffic noise prediction using soft computing approach.

    Science.gov (United States)

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

    A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely Generalized Linear Models, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in Patiala city, India. The input variables include the traffic volume per hour, the percentage of heavy vehicles and the average speed of vehicles. The performance of the four models is compared on the basis of the coefficient of determination, mean squared error and accuracy. 10-fold cross-validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
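
The comparison criteria named in the record can be computed directly; a stdlib-only sketch with made-up hourly Leq values in dB(A) (the actual measurements and model outputs are not reproduced here):

```python
def mse(y_true, y_pred):
    """Mean squared error between measured and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

measured = [68.2, 71.5, 74.9, 70.1, 66.8]   # hypothetical hourly Leq, dB(A)
predicted = [67.9, 72.0, 74.1, 70.6, 67.3]  # hypothetical model output
```

A model such as the study's Random Forest would be preferred when it yields a higher coefficient of determination and a lower MSE on the held-out folds of the 10-fold cross-validation.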

  4. Non-invasive prenatal diagnosis of achondroplasia and thanatophoric dysplasia: next-generation sequencing allows for a safer, more accurate, and comprehensive approach.

    Science.gov (United States)

    Chitty, Lyn S; Mason, Sarah; Barrett, Angela N; McKay, Fiona; Lench, Nicholas; Daley, Rebecca; Jenkins, Lucy A

    2015-07-01

    Accurate prenatal diagnosis of genetic conditions can be challenging and usually requires invasive testing. Here, we demonstrate the potential of next-generation sequencing (NGS) for the analysis of cell-free DNA in maternal blood to transform prenatal diagnosis of monogenic disorders. Analysis of cell-free DNA using a PCR and restriction enzyme digest (PCR-RED) was compared with a novel NGS assay in pregnancies at risk of achondroplasia and thanatophoric dysplasia. PCR-RED was performed in 72 cases and was correct in 88.6%, inconclusive in 7% with one false negative. NGS was performed in 47 cases and was accurate in 96.2% with no inconclusives. Both approaches were used in 27 cases, with NGS giving the correct result in the two cases inconclusive with PCR-RED. NGS provides an accurate, flexible approach to non-invasive prenatal diagnosis of de novo and paternally inherited mutations. It is more sensitive than PCR-RED and is ideal when screening a gene with multiple potential pathogenic mutations. These findings highlight the value of NGS in the development of non-invasive prenatal diagnosis for other monogenic disorders. © 2015 John Wiley & Sons, Ltd.

  5. An Organic Computing Approach to Self-organising Robot Ensembles

    Directory of Open Access Journals (Sweden)

    Sebastian Albrecht von Mammen

    2016-11-01

    Full Text Available Similar to the Autonomic Computing initiative, which has mainly been advancing techniques for self-optimisation focussing on computing systems and infrastructures, Organic Computing (OC) has been driving the development of system design concepts and algorithms for self-adaptive systems at large. Examples of application domains include traffic management and control, cloud services, communication protocols, and robotic systems. Such an OC system typically consists of a potentially large set of autonomous and self-managed entities, where each entity acts with a local decision horizon. By means of cooperation of the individual entities, the behaviour of the entire ensemble system is derived. In this article, we present our work on how autonomous, adaptive robot ensembles can benefit from OC technology. Our elaborations are aligned with the different layers of an observer/controller framework which provides the foundation for the individuals' adaptivity at system design-level. Relying on an extended Learning Classifier System (XCS) in combination with adequate simulation techniques, this basic system design empowers robot individuals to improve their individual and collaborative performances, e.g. by means of adapting to changing goals and conditions. Not only for the sake of generalisability, but also because of its enormous transformative potential, we stage our research in the domain of robot ensembles that are typically comprised of several quad-rotors and that organise themselves to fulfil spatial tasks such as maintenance of building facades or the collaborative search for mobile targets. Our elaborations detail the architectural concept, provide examples of individual self-optimisation as well as of the optimisation of collaborative efforts, and we show how the user can control the ensembles at multiple levels of abstraction. We conclude with a summary of our approach and an outlook on possible future steps.

  6. A computational approach to climate science education with CLIMLAB

    Science.gov (United States)

    Rose, B. E. J.

    2017-12-01

    CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
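The kind of blackboard-level theory CLIMLAB aims to connect with comprehensive models can be illustrated by a zero-dimensional energy balance model; this standalone sketch does not use the CLIMLAB API itself, and the parameter values are standard textbook numbers:

```python
# A blackboard-level zero-dimensional energy balance model, the simplest rung
# of the model hierarchy discussed above.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1365.0       # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo
EPSILON = 0.61    # effective emissivity (crude greenhouse effect)

def equilibrium_temperature(s0=S0, albedo=ALBEDO, eps=EPSILON):
    """Solve (1 - albedo) * s0 / 4 = eps * SIGMA * T**4 for T."""
    absorbed = (1.0 - albedo) * s0 / 4.0
    return (absorbed / (eps * SIGMA)) ** 0.25

print(f"equilibrium surface temperature: {equilibrium_temperature():.1f} K")
```
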

  7. Cone beam computed tomography: An accurate imaging technique in comparison with orthogonal portal imaging in intensity-modulated radiotherapy for prostate cancer

    Directory of Open Access Journals (Sweden)

    Om Prakash Gurjar

    2016-03-01

    Full Text Available Purpose: Various factors cause geometric uncertainties during prostate radiotherapy, including interfractional and intrafractional patient motions, organ motion, and daily setup errors. This may lead to increased normal tissue complications when a high dose to the prostate is administered. More-accurate treatment delivery is possible with daily imaging and localization of the prostate. This study aims to measure the shift of the prostate by using kilovoltage (kV) cone beam computed tomography (CBCT) after position verification by kV orthogonal portal imaging (OPI). Methods: Position verification in 10 patients with prostate cancer was performed by using OPI followed by CBCT before treatment delivery in 25 sessions per patient. In each session, OPI was performed by using an on-board imaging (OBI) system and pelvic bone-to-pelvic bone matching was performed. After applying the shift noted by using OPI, CBCT was performed by using the OBI system and prostate-to-prostate matching was performed. The isocenter shifts along all three translational directions in both techniques were combined into a three-dimensional (3-D) iso-displacement vector (IDV). Results: The mean (SD) IDV (in centimeters) calculated during the 250 imaging sessions was 0.931 (0.598; median 0.825) for OPI and 0.515 (0.336; median 0.43) for CBCT; the p-value was less than 0.0001, indicating an extremely statistically significant difference. Conclusion: Even after bone-to-bone matching by using OPI, a significant shift in the prostate was observed on CBCT. This study concludes that imaging with CBCT provides more accurate prostate localization than the OPI technique. Hence, CBCT should be chosen as the preferred imaging technique.
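The combination of the three translational shifts into a single 3-D iso-displacement vector (IDV) is simply a Euclidean norm; a minimal sketch with illustrative shift values (not the patients' data):

```python
# 3-D iso-displacement vector (IDV): the lateral, longitudinal, and vertical
# couch shifts combined into one magnitude. Shift values are illustrative.
import math

def idv(dx, dy, dz):
    """Combine the three translational shifts (cm) into one 3-D vector length."""
    return math.sqrt(dx**2 + dy**2 + dz**2)

print(f"IDV = {idv(0.3, 0.4, 0.0):.2f} cm")  # 3-4-5 triangle: 0.50 cm
```
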

  8. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent

    Science.gov (United States)

    Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie N. C.; Tomaszewski, John; González, Fabio A.; Madabhushi, Anant

    2017-04-01

    With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62% and a negative predictive value of 96.77% in terms of pixel-by-pixel evaluation compared to manually annotated regions of invasive ductal carcinoma.
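The pixel-wise metrics reported above (Dice coefficient, positive and negative predictive value) can be computed from binary masks as follows; the tiny masks are toy examples, not pathology data:

```python
# Pixel-wise evaluation metrics on binary masks: Dice, PPV, NPV.
import numpy as np

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def ppv(pred, truth):
    tp = np.logical_and(pred, truth).sum()   # true positives
    return tp / pred.sum()

def npv(pred, truth):
    tn = np.logical_and(~pred, ~truth).sum() # true negatives
    return tn / (~pred).sum()

truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0]], dtype=bool)
pred  = np.array([[1, 1, 1, 0], [1, 0, 0, 0]], dtype=bool)
print(f"Dice={dice(pred, truth):.2f}  PPV={ppv(pred, truth):.2f}  NPV={npv(pred, truth):.2f}")
```
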

  9. Cloud computing approaches for prediction of ligand binding poses and pathways.

    Science.gov (United States)

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S

    2015-01-22

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design.

  10. Optimal needle placement for the accurate magnetic material quantification based on uncertainty analysis in the inverse approach

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2010-01-01

    The measured voltage signals picked up by the needle probe method can be interpreted by a numerical method so as to identify the magnetic material properties of the magnetic circuit of an electromagnetic device. However, when solving this electromagnetic inverse problem, the uncertainties in the numerical method give rise to recovery errors since the calculated needle signals in the forward problem are sensitive to these uncertainties. This paper proposes a stochastic Cramér–Rao bound method for determining the optimal sensor placement in the experimental setup. The numerical method is computationally time-efficient, provided that the geometrical parameters are supplied. We apply the method to the non-destructive magnetic material characterization of an EI inductor, for which we ascertain the optimal experiment design. This design corresponds to the highest possible resolution that can be obtained when solving the inverse problem. Moreover, the presented results are validated by comparison with the exact material characteristics. The results show that the proposed methodology is independent of the values of the material parameter, so that it can be applied before solving the inverse problem, i.e. as an a priori estimation stage.
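The general idea of Cramér–Rao-bound-based experiment design can be sketched on a toy one-parameter signal model with Gaussian noise: the best sensor position maximizes the Fisher information (equivalently, minimizes the bound). The exponential forward model below is an invented stand-in, not the paper's electromagnetic needle-probe model:

```python
# Toy Cramér-Rao-bound experiment design: pick the candidate sensor position
# with the largest Fisher information for the parameter theta, i.e. the
# smallest attainable variance bound. Forward model s = exp(-theta * x).
import numpy as np

def fisher_information(x, theta, sigma, h=1e-6):
    """Scalar Fisher information via a central finite difference of ds/dtheta."""
    ds = (np.exp(-(theta + h) * x) - np.exp(-(theta - h) * x)) / (2 * h)
    return (ds / sigma) ** 2

candidates = np.linspace(0.1, 5.0, 50)   # candidate sensor positions
theta_nominal, sigma = 1.0, 0.01
fi = fisher_information(candidates, theta_nominal, sigma)
best = candidates[np.argmax(fi)]         # smallest CRB = 1 / max Fisher info
print(f"optimal sensor position: {best:.2f}")
```

For this model the information is proportional to x² e^(−2θx), which peaks at x = 1/θ, so the search correctly selects x ≈ 1.
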

  11. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  12. A computational approach to finding novel targets for existing drugs.

    Directory of Open Access Journals (Sweden)

    Yvonne Y Li

    2011-09-01

    Full Text Available Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. Novel interactions discovered may lead to the drug being repositioned as a therapeutic treatment for its off-target's associated disease, added insight into the drug's mechanism of action, and added insight into the drug's side effects.
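The stringent-filtering step can be illustrated on a toy score set: keep only drug-target interactions that pass both an absolute docking-score cutoff and a z-score (specificity-like) threshold. All numbers here are invented for the sketch, not the pipeline's actual criteria:

```python
# Toy version of stringent filtering of docking scores: an interaction must
# beat an absolute cutoff AND stand out from the score distribution (z-score).
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(-7.0, 1.5, 1000)          # toy docking scores (kcal/mol)
z = (scores - scores.mean()) / scores.std()
hits = scores[(scores < -9.0) & (z < -1.5)]   # more negative = better
print(f"{hits.size} of {scores.size} interactions pass the stringent filter")
```
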

  13. Computer-Aided Approaches for Targeting HIVgp41

    Directory of Open Access Journals (Sweden)

    William J. Allen

    2012-08-01

    Full Text Available Virus-cell fusion is the primary means by which the human immunodeficiency virus-1 (HIV) delivers its genetic material into the human T-cell host. Fusion is mediated in large part by the viral glycoprotein 41 (gp41), which advances through four distinct conformational states: (i) native, (ii) pre-hairpin intermediate, (iii) fusion active (fusogenic), and (iv) post-fusion. The pre-hairpin intermediate is a particularly attractive step for therapeutic intervention given that the gp41 N-terminal heptad repeat (NHR) and C-terminal heptad repeat (CHR) domains are transiently exposed prior to the formation of a six-helix bundle required for fusion. Most peptide-based inhibitors, including the FDA-approved drug T20, target the intermediate and there are significant efforts to develop small molecule alternatives. Here, we review current approaches to studying interactions of inhibitors with gp41 with an emphasis on atomic-level computer modeling methods including molecular dynamics, free energy analysis, and docking. Atomistic modeling yields a unique level of structural and energetic detail, complementary to experimental approaches, which will be important for the design of improved next generation anti-HIV drugs.

  14. Computed tomography of the lung. A pattern approach. 2. ed.

    International Nuclear Information System (INIS)

    Verschakelen, Johny A.; Wever, Walter de

    2018-01-01

    Computed Tomography of the Lung: A Pattern Approach aims to enable the reader to recognize and understand the CT signs of lung diseases and diseases with pulmonary involvement as a sound basis for diagnosis. After an introductory chapter, basic anatomy and its relevance to the interpretation of CT appearances is discussed. Advice is then provided on how to approach a CT scan of the lungs, and the different distribution and appearance patterns of disease are described. Subsequent chapters focus on the nature of these patterns, identify which diseases give rise to them, and explain how to differentiate between the diseases. The concluding chapter presents a large number of typical and less typical cases that will help the reader to practice application of the knowledge gained from the earlier chapters. Since the first edition, the book has been adapted and updated, with the inclusion of many new figures and case studies. It will be an invaluable asset both for radiologists and pulmonologists in training and for more experienced specialists wishing to update their knowledge.

  15. Optical computing - an alternate approach to trigger processing

    International Nuclear Information System (INIS)

    Cleland, W.E.

    1981-01-01

    The enormous rate reduction factors required by most ISABELLE experiments suggest that we should examine every conceivable approach to trigger processing. One approach that has not received much attention by high energy physicists is optical data processing. The past few years have seen rapid advances in optoelectronic technology, stimulated mainly by the military and the communications industry. An intriguing question is whether one can utilize this technology together with the optical computing techniques that have been developed over the past two decades to develop a rapid trigger processor for high energy physics experiments. Optical data processing is a method for performing a few very specialized operations on data which is inherently two dimensional. Typical operations are the formation of convolution or correlation integrals between the input data and information stored in the processor in the form of an optical filter. Optical processors are classed as coherent or incoherent, according to the spatial coherence of the input wavefront. Typically, in a coherent processor a laser beam is modulated with a photographic transparency which represents the input data. In an incoherent processor, the input may be an incoherently illuminated transparency, but self-luminous objects, such as an oscilloscope trace, have also been used. We consider here an incoherent processor in which the input data is converted into an optical wavefront through the excitation of an array of point sources - either light emitting diodes or injection lasers
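Digitally, the correlation integral an optical processor forms between the input data and a stored filter corresponds to FFT-based cross-correlation; a toy 1-D sketch:

```python
# What an optical correlator computes, done digitally: circular
# cross-correlation of an input pattern with a matched filter via the
# convolution theorem. The 1-D "track" pattern is a toy example.
import numpy as np

signal = np.zeros(64)
signal[20:25] = 1.0            # input pattern, located at shift 20
template = np.zeros(64)
template[0:5] = 1.0            # stored matched filter

# Circular cross-correlation: ifft(F(signal) * conj(F(template)))
corr = np.real(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template))))
print(f"pattern located at shift {int(np.argmax(corr))}")
```
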

  16. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
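The hallmark of such spectral representations is exponential (spectral) accuracy for smooth periodic data; a minimal illustration using FFT-based differentiation (not the contour-dynamics code itself):

```python
# Spectral differentiation of a smooth periodic function via FFT: for sin(x)
# on a uniform periodic grid, the derivative is recovered to near machine
# precision, illustrating the "spectrally accurate" label.
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
f = np.sin(x)
k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers 0..31, -32..-1
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
err = np.max(np.abs(df - np.cos(x)))
print(f"max error of spectral derivative: {err:.2e}")
```
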

  17. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method.

    Science.gov (United States)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-21

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) were popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work, an efficient production-level implementation of the closed-shell CEPA and CPF methods is reported that can be applied to medium-sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol(-1). Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
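The core compression idea behind pair natural orbitals can be sketched in a few lines: diagonalise a pair density matrix and discard natural orbitals with negligible occupation numbers. The matrix below is a random positive semi-definite stand-in, not a real pair density:

```python
# PNO-style compression sketch: eigendecompose a (toy) pair density matrix
# and keep only natural orbitals whose occupation number exceeds a cutoff,
# shrinking the external space treated for that electron pair.
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(40, 6))
pair_density = a @ a.T / 40.0        # rank-6 toy "pair density" (40 virtuals)
occ, vecs = np.linalg.eigh(pair_density)
cutoff = 1e-8                        # occupation-number truncation threshold
pnos = vecs[:, occ > cutoff]         # retained pair natural orbitals
print(f"kept {pnos.shape[1]} of {pair_density.shape[0]} virtuals for this pair")
```
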

  18. A nuclear magnetic resonance based approach to accurate functional annotation of putative enzymes in the methanogen Methanosarcina acetivorans

    Directory of Open Access Journals (Sweden)

    Nikolau Basil J

    2011-06-01

    Full Text Available Abstract Background Correct annotation of function is essential if one is to take full advantage of the vast amounts of genomic sequence data. The accuracy of sequence-based functional annotations is often variable, particularly if the sequence homology to a known function is low. Indeed, recent work has shown that even proteins with very high sequence identity can have different folds and functions, and therefore caution is needed in assigning functions by sequence homology in the absence of experimental validation. Experimental methods are therefore needed to efficiently evaluate annotations in a way that complements current high throughput technologies. Here, we describe the use of nuclear magnetic resonance (NMR)-based ligand screening as a tool for testing functional assignments of putative enzymes that may be of variable reliability. Results The target genes for this study are putative enzymes from the methanogenic archaeon Methanosarcina acetivorans (MA) that have been selected after manual genome re-annotation and demonstrate detectable in vivo expression at the level of the transcriptome. The experimental approach begins with heterologous E. coli expression and purification of individual MA gene products. An NMR-based ligand screen of the purified protein then identifies possible substrates or products from a library of candidate compounds chosen from the putative pathway and other related pathways. These data are used to determine if the current sequence-based annotation is likely to be correct. For a number of case studies, additional experiments (such as in vivo genetic complementation) were performed to determine function so that the reliability of the NMR screen could be independently assessed. Conclusions In all examples studied, the NMR screen was indicative of whether the functional annotation was correct. Thus, the case studies described demonstrate that NMR-based ligand screening is an effective and rapid tool for confirming or

  19. An evolutionary computation approach to examine functional brain plasticity

    Directory of Open Access Journals (Sweden)

    Arnab eRoy

    2016-04-01

    Full Text Available One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between a ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in
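A minimal (1+1) evolutionary search in the spirit of the EC procedure described above; the "BOLD" signals are random stand-ins for real time series, and the spatial-connectivity constraint on the evolved voxel sets is omitted for brevity:

```python
# (1+1) evolutionary search: evolve one binary voxel mask per ROI so that the
# correlation between the masked mean signals changes maximally between two
# sessions. Signals are random stand-ins, not BOLD-fMRI data.
import numpy as np

rng = np.random.default_rng(6)
t, v = 120, 20                                                   # timepoints, voxels
s1_a, s1_b = rng.normal(size=(t, v)), rng.normal(size=(t, v))    # session 1
s2_a, s2_b = rng.normal(size=(t, v)), rng.normal(size=(t, v))    # session 2

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def fitness(mask_a, mask_b):
    """Absolute change in connectivity strength between the two sessions."""
    if not mask_a.any() or not mask_b.any():
        return -np.inf
    c1 = corr(s1_a[:, mask_a].mean(1), s1_b[:, mask_b].mean(1))
    c2 = corr(s2_a[:, mask_a].mean(1), s2_b[:, mask_b].mean(1))
    return abs(c2 - c1)

mask_a, mask_b = np.ones(v, bool), np.ones(v, bool)
best = fitness(mask_a, mask_b)
for _ in range(300):                     # mutate one voxel per ROI, keep if better
    trial_a, trial_b = mask_a.copy(), mask_b.copy()
    trial_a[rng.integers(v)] ^= True
    trial_b[rng.integers(v)] ^= True
    f = fitness(trial_a, trial_b)
    if f >= best:
        mask_a, mask_b, best = trial_a, trial_b, f

print(f"plasticity score of evolved sub-region pair: {best:.3f}")
```
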

  20. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images

    Energy Technology Data Exchange (ETDEWEB)

    De Backer, A.; Bos, K.H.W. van den [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Van den Broek, W. [AG Strukturforschung/Elektronenmikroskopie, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Van Aert, S., E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2016-12-15

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images has been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, which is freely available under a GNU public license. - Highlights: • An efficient model-based method for quantitative electron microscopy is introduced. • Images are modelled as a superposition of 2D Gaussian peaks. • Overlap between neighbouring columns is taken into account. • Structure parameters can be obtained with the highest precision and accuracy. • StatSTEM, a user-friendly program (GNU public license), is developed.
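The model-based fitting idea, images as a superposition of 2D Gaussian peaks estimated by least squares, can be sketched for a single synthetic peak with SciPy; the peak parameters and noise level are invented for the example:

```python
# Least-squares fit of one 2D Gaussian peak to a synthetic noisy image,
# recovering the column position, in the spirit of StatSTEM's image model.
import numpy as np
from scipy.optimize import least_squares

yy, xx = np.mgrid[0:32, 0:32].astype(float)

def gaussian2d(p):
    x0, y0, amp, width = p
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(3)
true = np.array([15.3, 16.7, 100.0, 2.5])   # position x, y, intensity, width
image = gaussian2d(true) + rng.normal(0, 1.0, xx.shape)

fit = least_squares(lambda p: (gaussian2d(p) - image).ravel(),
                    x0=[16.0, 16.0, 80.0, 3.0]).x
print(f"fitted column position: ({fit[0]:.2f}, {fit[1]:.2f})")
```
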

  1. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    Science.gov (United States)

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
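The surrogate-model idea can be sketched as follows: train a fast regressor on synthetic geometries whose targets come from the physics-based model; here the expensive CFD computation is replaced by a trivially cheap invented formula, and the feature names are illustrative, not the paper's anatomical features:

```python
# Machine-learning surrogate for a physics-based model: generate synthetic
# "anatomies", label them with the (stand-in) physics model, train a fast
# regressor, and compare its prediction to the physics result.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 2000
stenosis = rng.uniform(0.0, 0.9, n)   # fractional lumen reduction (illustrative)
length = rng.uniform(2.0, 30.0, n)    # lesion length, mm (illustrative)

def physics_model(stenosis, length):
    """Trivially cheap stand-in for the expensive CFD computation of FFR."""
    return 1.0 - 0.6 * stenosis**2 - 0.004 * length

X = np.column_stack([stenosis, length])
ffr = physics_model(stenosis, length)
surrogate = GradientBoostingRegressor(random_state=0).fit(X, ffr)
pred = surrogate.predict([[0.5, 10.0]])[0]
print(f"surrogate FFR: {pred:.3f} vs physics {physics_model(0.5, 10.0):.3f}")
```
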

  2. A Representation-Theoretic Approach to Reversible Computation with Applications

    DEFF Research Database (Denmark)

    Maniotis, Andreas Milton

    Reversible computing is a sub-discipline of computer science that helps to understand the foundations of the interplay between physics, algebra, and logic in the context of computation. Its subjects of study are computational devices and abstract models of computation that satisfy the constraint of information conservation. Such machine models, which are known as reversible models of computation, have been examined both from a theoretical perspective and from an engineering perspective. While a bundle of many isolated successful findings and applications concerning reversible computing exists, there is still no uniform and consistent theory that is general in the sense of giving a model-independent account to the field.

  3. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Science.gov (United States)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may lead to a reduced dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter, lower system cost, and lower imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, still preserving sharp edges in the images.
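The compressed-sensing reconstruction principle can be illustrated on a toy problem: recover a sparse signal from few random projections by iterative soft thresholding (ISTA). The measurement matrix here is random Gaussian, not a CT system model:

```python
# Toy compressed sensing: recover a 3-sparse signal of length 100 from only
# 40 random linear measurements using ISTA (iterative soft thresholding).
import numpy as np

rng = np.random.default_rng(5)
n, m = 100, 40                               # signal length, measurements
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]       # 3-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from spectral norm
lam = 0.01                                   # sparsity-promoting penalty
x = np.zeros(n)
for _ in range(500):                         # ISTA: gradient step + shrinkage
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print(f"recovery error: {np.linalg.norm(x - x_true):.3f}")
```
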

  4. A systematic approach to the accurate quantification of selenium in serum selenoalbumin by HPLC-ICP-MS

    International Nuclear Information System (INIS)

    Jitaru, Petru; Goenaga-Infante, Heidi; Vaslin-Reimann, Sophie; Fisicaro, Paola

    2010-01-01

    In this paper, two different methods are systematically compared for the first time for the determination of selenium in human serum selenoalbumin (SeAlb). Firstly, SeAlb was enzymatically hydrolyzed and the resulting selenomethionine (SeMet) was quantified using species-specific isotope dilution (SSID) with reversed-phase HPLC (RP-HPLC) hyphenated to (collision/reaction cell) inductively coupled plasma-quadrupole mass spectrometry (CRC ICP-QMS). In order to assess the enzymatic hydrolysis yield, SeAlb was determined as an intact protein by affinity HPLC (AF-HPLC) coupled to CRC ICP-QMS. Using this approach, glutathione peroxidase (GPx) and selenoprotein P (SelP) (the two selenoproteins present in serum) were also determined within the same chromatographic run. The levels of selenium associated with SeAlb in three serum materials, namely BCR-637, Seronorm level 1 and Seronorm level 2, obtained using both methods were in good agreement. Verification of the absence of free SeMet, which interferes with the SeAlb determination (down to the amino acid level), in such materials was addressed by analyzing the GPx fraction, partially purified by AF-HPLC, using RP-HPLC (GPx only) and size-exclusion HPLC (SE-HPLC) coupled to CRC ICP-QMS. The latter methodology was also used to investigate the presence of selenium species other than the selenoproteins in the (AF-HPLC) SelP and SeAlb fractions; the same selenium peaks were detected in both control and BCR-637 serum with a difference in age of ca. 12 years. This is also the first time that the concentrations of selenium associated with the SeAlb, GPx and SelP species in such commercially available sera (certified, or having indicative values, only for total selenium content) are reported. Such indicative values can be used for reference purposes in future validation of speciation methods for selenium in human serum and/or inter-laboratory comparisons.
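    The core arithmetic of isotope dilution can be sketched as a round trip: spike the sample with an isotopically enriched form of the analyte, measure one isotope ratio in the blend, and invert the mixing equation for the unknown amount. The spike abundances and amounts below are illustrative placeholders, not values from the paper.

```python
# Natural Se isotope abundances (77Se, 78Se) and a hypothetical
# 77Se-enriched spike; the spike composition is an assumed example.
natural = {"Se77": 0.0763, "Se78": 0.2377}
spike   = {"Se77": 0.94,   "Se78": 0.03}

n_spike = 2.0e-9           # mol of Se added as enriched spike
n_sample_true = 5.0e-9     # mol of Se in the sample (unknown in practice)

# "Measured" blend ratio R = 78Se / 77Se, simulated from the true amounts
R = ((n_sample_true * natural["Se78"] + n_spike * spike["Se78"]) /
     (n_sample_true * natural["Se77"] + n_spike * spike["Se77"]))

# Invert the mixing equation:
#   R = (n_x*a78_x + n_z*a78_z) / (n_x*a77_x + n_z*a77_z)
n_sample = n_spike * (spike["Se78"] - R * spike["Se77"]) / \
           (R * natural["Se77"] - natural["Se78"])

print(f"recovered sample amount: {n_sample:.3e} mol")
```

    The recovered amount matches the true value exactly (up to floating point), which is why isotope dilution is regarded as a primary, high-accuracy measurement method.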

  5. A systematic approach to the accurate quantification of selenium in serum selenoalbumin by HPLC-ICP-MS

    Energy Technology Data Exchange (ETDEWEB)

    Jitaru, Petru, E-mail: Petru.Jitaru@lne.fr [Laboratoire National de Metrologie et d' Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France); Goenaga-Infante, Heidi [LGC Limited, Queens Road, Teddington, TW11 OLY, Middlesex (United Kingdom); Vaslin-Reimann, Sophie; Fisicaro, Paola [Laboratoire National de Metrologie et d' Essais (LNE), Department of Biomedical and Inorganic Chemistry, 1 rue Gaston Boissier, 75015 Paris (France)

    2010-01-11

    In this paper, two different methods are systematically compared for the first time for the determination of selenium in human serum selenoalbumin (SeAlb). Firstly, SeAlb was enzymatically hydrolyzed and the resulting selenomethionine (SeMet) was quantified using species-specific isotope dilution (SSID) with reversed-phase HPLC (RP-HPLC) hyphenated to (collision/reaction cell) inductively coupled plasma-quadrupole mass spectrometry (CRC ICP-QMS). In order to assess the enzymatic hydrolysis yield, SeAlb was determined as an intact protein by affinity HPLC (AF-HPLC) coupled to CRC ICP-QMS. Using this approach, glutathione peroxidase (GPx) and selenoprotein P (SelP) (the two selenoproteins present in serum) were also determined within the same chromatographic run. The levels of selenium associated with SeAlb in three serum materials, namely BCR-637, Seronorm level 1 and Seronorm level 2, obtained using both methods were in good agreement. Verification of the absence of free SeMet, which interferes with the SeAlb determination (down to the amino acid level), in such materials was addressed by analyzing the GPx fraction, partially purified by AF-HPLC, using RP-HPLC (GPx only) and size-exclusion HPLC (SE-HPLC) coupled to CRC ICP-QMS. The latter methodology was also used to investigate the presence of selenium species other than the selenoproteins in the (AF-HPLC) SelP and SeAlb fractions; the same selenium peaks were detected in both control and BCR-637 serum with a difference in age of ca. 12 years. This is also the first time that the concentrations of selenium associated with the SeAlb, GPx and SelP species in such commercially available sera (certified, or having indicative values, only for total selenium content) are reported. Such indicative values can be used for reference purposes in future validation of speciation methods for selenium in human serum and/or inter-laboratory comparisons.

  6. A computationally efficient approach for template matching-based ...

    Indian Academy of Sciences (India)

    In this paper, a new computationally efficient image registration method is ...... the proposed method requires less computational time as compared to traditional methods. ... Zitová B and Flusser J 2003 Image registration methods: A survey.

  7. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    When assessing the performance of the free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
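    The exponential-twisting idea can be sketched on a Gaussian toy model (not the Beckmann pointing-error model of the paper): shift the sampling density toward the rare region and reweight each sample by the likelihood ratio, so the rare event is hit often instead of almost never.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
t = 5.0                                   # "outage" threshold: P(Z > t) is rare
exact = 0.5 * erfc(t / sqrt(2.0))         # exact Gaussian tail, for reference

# Exponential twisting: draw from N(theta, 1) instead of N(0, 1), then
# reweight each sample by the likelihood ratio exp(-theta*x + theta^2/2).
theta = t                                 # standard choice: tilt the mean onto the threshold
n = 100_000
x = rng.normal(theta, 1.0, n)
w = np.exp(-theta * x + 0.5 * theta**2) * (x > t)
est = w.mean()
se = w.std(ddof=1) / np.sqrt(n)

print(f"exact {exact:.3e}   IS estimate {est:.3e} +/- {se:.1e}")
```

    With 10^5 samples the importance-sampling estimate lands within about 1% of the true probability of roughly 2.9e-7, whereas a naive Monte Carlo run of the same size would typically observe zero exceedances.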

  8. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of the free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.

  9. An Integrated Soft Computing Approach to Hughes Syndrome Risk Assessment.

    Science.gov (United States)

    Vilhena, João; Rosário Martins, M; Vicente, Henrique; Grañeda, José M; Caldeira, Filomena; Gusmão, Rodrigo; Neves, João; Neves, José

    2017-03-01

    The AntiPhospholipid Syndrome (APS) is an acquired autoimmune disorder induced by high levels of antiphospholipid antibodies whose clinical manifestations include arterial and venous thrombosis, as well as pregnancy-related complications and morbidity. This autoimmune hypercoagulable state, usually known as Hughes syndrome, has severe consequences for patients, being one of the main causes of thrombotic disorders and death. Prevention is therefore essential, i.e., being aware of how likely a patient is to develop the syndrome. Despite the updating of the antiphospholipid syndrome classification, the diagnosis remains difficult to establish. Additional research on clinically relevant antibodies and standardization of their quantification are required in order to improve antiphospholipid syndrome risk assessment. Thus, this work focuses on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a computational framework based on Artificial Neural Networks. The proposed model improves the diagnosis, properly classifying the patients who actually present this pathology (sensitivity higher than 85%), as well as identifying the absence of APS (specificity close to 95%).
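    The two headline metrics quoted above come straight from a confusion matrix. The counts below are made up for illustration, not the study's data; they merely show how sensitivity and specificity are computed.

```python
# Hypothetical confusion-matrix counts for a diagnostic classifier.
tp, fn = 52, 8     # APS patients: correctly flagged vs missed
tn, fp = 112, 6    # non-APS patients: correctly cleared vs false alarms

sensitivity = tp / (tp + fn)   # fraction of true APS cases detected
specificity = tn / (tn + fp)   # fraction of non-APS cases correctly cleared

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```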

  10. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit-cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the algorithm and data will hence be publicly available.
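    The strain-energy part of such a screen can be sketched as ranking substrate candidates by in-plane lattice mismatch under a simple isotropic biaxial elastic model. All lattice constants, substrate labels, and elastic constants below are illustrative placeholders, not the paper's data.

```python
# Toy substrate screening: strain the film in-plane to match each candidate
# substrate and rank by the resulting elastic energy density.
film = {"a": 4.60, "b": 5.43}            # hypothetical film in-plane constants (Angstrom)
substrates = {                           # hypothetical substrate planes
    "TiO2-A": {"a": 4.59, "b": 5.45},
    "TiO2-B": {"a": 4.75, "b": 5.20},
    "TiO2-C": {"a": 4.65, "b": 5.60},
}
biaxial_modulus = 200.0e9                # assumed E/(1-nu) in Pa

def strain_energy_density(sub):
    ea = (sub["a"] - film["a"]) / film["a"]   # in-plane strains imposed on the film
    eb = (sub["b"] - film["b"]) / film["b"]
    # simple isotropic biaxial energy density: u = M/2 * (ea^2 + eb^2)
    return 0.5 * biaxial_modulus * (ea**2 + eb**2)

ranked = sorted(substrates, key=lambda s: strain_energy_density(substrates[s]))
for name in ranked:
    u = strain_energy_density(substrates[name]) / 1e6
    print(f"{name}: u = {u:.2f} MJ/m^3")
```

    The real framework combines this kind of geometric/elastic term with DFT formation energies and topological matching, but the ranking step has the same shape: smallest strain-energy density first.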

  11. An Educational Approach to Computationally Modeling Dynamical Systems

    Science.gov (United States)

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  12. Teaching Pervasive Computing to CS Freshmen: A Multidisciplinary Approach

    NARCIS (Netherlands)

    Silvis-Cividjian, Natalia

    2015-01-01

    Pervasive Computing is a growing area in research and commercial reality. Despite this extensive growth, there is no clear consensus on how and when to teach it to students. We report on an innovative attempt to teach this subject to first year Computer Science students. Our course combines computer

  13. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.
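    The spectroscopic starting point for this kind of approach is often the multi-component Beer-Lambert law: absorbance at each wavelength is a linear mix of the component spectra, so concentrations can be recovered by least squares. The sketch below uses synthetic spectra as a generic linear illustration; it is not the paper's adaptive modelling scheme, which is designed precisely for the cases where this simple linear picture breaks down.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wavelengths, n_components = 40, 3

# Synthetic "molar absorptivity x path length" matrix: one column per
# absorbing component (say, glucose plus two interferents).
eps = rng.uniform(0.1, 1.0, (n_wavelengths, n_components))
c_true = np.array([5.5, 2.0, 0.8])       # concentrations, e.g. mmol/L

# Beer-Lambert mixture plus small measurement noise
absorbance = eps @ c_true + rng.normal(0.0, 0.005, n_wavelengths)

# Recover concentrations by linear least squares
c_est, *_ = np.linalg.lstsq(eps, absorbance, rcond=None)
print("estimated concentrations:", np.round(c_est, 2))
```

    In real tissue the effective path length and scattering vary between subjects, which is the calibration problem the paper's adaptive light-tissue interaction model is meant to remove.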

  14. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.

  15. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from complex biological media.

  16. Computer face-matching technology using two-dimensional photographs accurately matches the facial gestalt of unrelated individuals with the same syndromic form of intellectual disability.

    Science.gov (United States)

    Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C

    2017-12-19

    Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second individual or a cluster of individuals with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology, we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of individuals from one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identified a top match, at least one correct match in the top five, and at least one in the top 10 more than expected by chance for all syndromes except Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing of the known developmental disorder genes.
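    The leave-one-out top-k protocol described above can be sketched with synthetic feature vectors standing in for face embeddings: each image is removed in turn, all remaining images are ranked by distance, and a "hit" is counted when a same-syndrome image appears in the top k. The clustered synthetic data below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, per_group, dim = 10, 20, 32        # 10 "syndromes", 20 images each

labels = np.repeat(np.arange(n_groups), per_group)
centers = rng.normal(0.0, 1.0, (n_groups, dim))
feats = centers[labels] + rng.normal(0.0, 0.8, (len(labels), dim))

def top_k_hit_rate(k):
    """Leave-one-out: does any of the k nearest neighbours share the label?"""
    hits = 0
    for i in range(len(feats)):
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                         # exclude the query image itself
        nearest = np.argsort(d)[:k]
        hits += bool(np.any(labels[nearest] == labels[i]))
    return hits / len(feats)

for k in (1, 5, 10):
    print(f"top-{k} hit rate: {top_k_hit_rate(k):.2f}")
```

    Comparing these hit rates against the chance level implied by group sizes (and against clinician judgement, as in the study) is what turns a ranked face-match list into an evaluable research question.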

  17. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  18. A new approach in development of data flow control and investigation system for computer networks

    International Nuclear Information System (INIS)

    Frolov, I.; Vaguine, A.; Silin, A.

    1992-01-01

    This paper describes a new approach to the development of a data flow control and investigation system for computer networks. The approach was developed and applied at the Moscow Radiotechnical Institute for the control and investigation of the Institute's computer network, and it allowed us to solve our current network problems successfully. A description of our approach is presented below, along with the most interesting results of our work. (author)

  19. A computational intelligence approach to the Mars Precision Landing problem

    Science.gov (United States)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by gravity turn to hover and then thrust vector to desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
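    The PSO method named in the abstract can be sketched in a few lines: each particle tracks its own best position and is pulled toward both that personal best and the swarm's global best. The toy objective and parameter values below are common textbook defaults, not the thesis's mission-specific setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(p):
    """Toy objective (sphere function); the minimum is at the origin."""
    return np.sum(p**2, axis=-1)

n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49             # inertia and acceleration coefficients

x = rng.uniform(-5.0, 5.0, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = cost(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best cost after {iters} iterations: {cost(gbest):.2e}")
```

    For the landing problem, `cost` would instead score a simulated descent trajectory (e.g. by miss distance), which is where the expensive part of the optimization lives.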

  20. A Soft Computing Approach to Kidney Diseases Evaluation.

    Science.gov (United States)

    Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique

    2015-10-01

    Kidney failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is detected, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination may denote good practice, some key information has to be obtained from the evaluation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes abnormal kidney function and/or structure, and there is evidence that treatment may avoid or delay its progression, either by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to do with traditional methodologies and existing problem-solving tools. Hence, in this work, we focus on the development of a hybrid decision support system, in terms of its knowledge representation and reasoning procedures based on Logic Programming, that allows one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks, in order to weigh the Degree-of-Confidence that one has in such a happening. The present study involved 558 patients with an average age of 51.7 years, and chronic kidney disease was observed in 175 cases. The dataset comprises twenty-four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the

  1. Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach

    OpenAIRE

    Grover Kearns

    2010-01-01

    Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants to use forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting stu...

  2. Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.

    Science.gov (United States)

    Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal

    2016-12-01

    In the present era, soft computing approaches play a vital role in solving many kinds of problems and provide promising solutions. Owing to their popularity, soft computing approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results than traditional approaches. Soft computing approaches have the ability to adapt themselves to the problem domain. Another aspect is a good balance between exploration and exploitation processes. These aspects make soft computing approaches powerful, reliable and efficient, and thus well suited to healthcare data. The first objective of this review paper is to identify the various soft computing approaches used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data, among them particle swarm optimization, genetic algorithms, artificial neural networks, and support vector machines. A detailed discussion of these approaches is presented in the literature section. This work summarizes the various soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized into five categories based on methodology: classification model based systems, expert systems, fuzzy and neuro-fuzzy systems, rule based systems and case based systems. Many techniques are discussed within these categories and summarized in tables. This work also focuses on the accuracy rate of soft computing techniques, and tabular information is provided for

  3. A general approach to the construction of 'very accurate' or 'definitive' methods by radiochemical NAA and the role of these methods in QA

    International Nuclear Information System (INIS)

    Dybczynski, R.

    1998-01-01

    Constant progress in the instrumentation and methodology of inorganic trace analysis is not always paralleled by improvement in the reliability of analytical results. Our approach to the construction of 'very accurate' methods for the determination of selected trace elements in biological materials by RNAA is based on the following assumptions: (i) The radionuclide in question should be selectively and quantitatively isolated from the irradiated sample by a suitable radiochemical scheme, optimized with respect to this particular radionuclide, finally yielding the analyte in a state of high radiochemical purity, which assures interference-free measurement by gamma-ray spectrometry. (ii) The radiochemical scheme should be based on ion exchange and/or extraction column chromatography, allowing easy automatic repetition of an elementary act of distribution of the analyte and accompanying radionuclides between stationary and mobile phases. (iii) The method should have intrinsic mechanisms incorporated into the procedure that prevent gross errors. Based on these general assumptions, several more specific rules for devising 'very accurate' methods were formulated and applied in developing our methods for the determination of copper, cobalt, nickel, cadmium, molybdenum and uranium in biological materials. The significance of such methods for Quality Assurance is pointed out and illustrated by their use in the certification campaign for the new Polish biological CRMs based on tobacco

  4. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  5. Gesture Recognition by Computer Vision : An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  6. Thermodynamic and relative approach to compute glass-forming ...

    Indian Academy of Sciences (India)

    models) characteristic: the isobaric heat capacity (Cp) of oxides, and execute a mathematical treatment of oxides thermodynamic data. We note this coefficient as thermodynamical relative glass-forming ability (ThRGFA) and formulate a model to compute it. Computed values of 2nd, 3rd, 4th and 5th period metal oxides ...

  7. An approach to quantum-computational hydrologic inverse analysis.

    Science.gov (United States)

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
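    A quantum annealer accepts problems in QUBO form (quadratic unconstrained binary optimization), so the essential step is recasting the inverse problem over binary unknowns as a QUBO energy. The sketch below does this for a tiny binary least-squares inversion and uses exhaustive search as a classical stand-in for the annealer; the forward model is a random placeholder, not the paper's hydrologic model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 8                                    # binary unknowns (e.g. high/low permeability cells)
x_true = rng.integers(0, 2, n)
A = rng.standard_normal((12, n))         # placeholder forward model: field -> observations
b = A @ x_true                           # noise-free "observed heads"

# ||A x - b||^2 with binary x (so x_i^2 = x_i) becomes x^T Q x + const:
# off-diagonal terms from A^T A, linear terms folded into the diagonal.
Q = A.T @ A
Q[np.diag_indices(n)] += -2.0 * (A.T @ b)

def qubo_energy(x):
    return x @ Q @ x

# Exhaustive search over all 2^n bit-strings stands in for the annealer.
best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
           key=qubo_energy)
print("recovered field matches truth:", np.array_equal(best, x_true))
```

    On hardware, the matrix `Q` would be handed to the annealer (subject to its connectivity and precision limits) instead of being minimized by enumeration.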

  8. Reading Emotion From Mouse Cursor Motions: Affective Computing Approach.

    Science.gov (United States)

    Yamauchi, Takashi; Xiao, Kunchen

    2018-04-01

Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about the emotion of the computer user. We extracted 16-26 trajectory features during a choice-reaching task and examined the link between emotion and cursor motions. Positive or negative emotions were induced in participants by music, film clips, or emotional pictures, and participants reported their emotions in questionnaires. Our 10-fold cross-validation analysis shows that statistical models formed from "known" participants (training data) could predict nearly 10%-20% of the variance of positive affect and attentiveness ratings of "unknown" participants, suggesting that cursor movement patterns such as the area under curve and direction change help infer emotions of computer users. Copyright © 2017 Cognitive Science Society, Inc.
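Two of the features the abstract names, area under curve and direction change, can be computed from a raw cursor trajectory. This is a hypothetical simplification of the paper's feature set, not its actual definitions:

```python
def path_area(points):
    """Area enclosed between the cursor path and the straight
    start-to-end line, via the shoelace formula: the wrap-around
    edge closes the polygon."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def direction_changes(xs):
    """Number of sign flips in consecutive horizontal displacements."""
    signs = [1 if b > a else -1 for a, b in zip(xs, xs[1:]) if b != a]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

# A cursor that arcs up and back down encloses area 1.0 here;
# a zig-zag in x registers two direction changes.
print(path_area([(0, 0), (1, 1), (2, 0)]))
print(direction_changes([0, 1, 2, 1, 2]))
```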

  9. Accurate quantification of endogenous androgenic steroids in cattle's meat by gas chromatography mass spectrometry using a surrogate analyte approach

    Energy Technology Data Exchange (ETDEWEB)

    Ahmadkhaniha, Reza; Shafiee, Abbas [Department of Medicinal Chemistry, Faculty of Pharmacy and Pharmaceutical Sciences Research Center, Tehran University of Medical Sciences, Tehran 14174 (Iran, Islamic Republic of); Rastkari, Noushin [Center for Environmental Research, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Kobarfard, Farzad [Department of Medicinal Chemistry, School of Pharmacy, Shaheed Beheshti University of Medical Sciences, Tavaneer Ave., Valieasr St., Tehran (Iran, Islamic Republic of)], E-mail: farzadkf@yahoo.com

    2009-01-05

Determination of endogenous steroids in complex matrices such as cattle's meat is a challenging task. Since endogenous steroids always exist in animal tissues, no analyte-free matrix is available for constructing the standard calibration line, which is crucial for accurate quantification, especially at trace levels. Although some methods have been proposed to solve the problem, none has offered a complete solution. To this end, a new quantification strategy was developed in this study, named the 'surrogate analyte approach', which is based on using isotope-labeled standards instead of the natural form of endogenous steroids for preparing the calibration line. In comparison with the other methods currently in use for the quantitation of endogenous steroids, this approach provides improved simplicity and speed for analysis on a routine basis. The accuracy of this method is better than that of other methods at low concentrations and comparable to standard addition at medium and high concentrations. The method was also found to be valid according to the ICH criteria for bioanalytical methods. The developed method could be a promising approach in the field of compound residue analysis.
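The calibration idea can be illustrated with an ordinary least-squares line fitted to surrogate (isotope-labeled) responses, which carry no endogenous background, and then used to back-calculate a sample. All concentrations and responses below are invented for illustration:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Calibration line built from the isotope-labeled surrogate: because the
# surrogate does not occur naturally, the matrix contributes no background.
conc = [1.0, 2.0, 5.0, 10.0]      # spiked surrogate levels (ng/g, invented)
resp = [2.1, 3.9, 10.2, 19.8]     # instrument responses (invented)
slope, intercept = fit_line(conc, resp)

# Back-calculate an unknown sample from its measured response.
unknown = (12.0 - intercept) / slope
print(unknown)
```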

  10. Human-Computer Interfaces for Wearable Computers: A Systematic Approach to Development and Evaluation

    OpenAIRE

    Witt, Hendrik

    2007-01-01

    The research presented in this thesis examines user interfaces for wearable computers.Wearable computers are a special kind of mobile computers that can be worn on the body. Furthermore, they integrate themselves even more seamlessly into different activities than a mobile phone or a personal digital assistant can.The thesis investigates the development and evaluation of user interfaces for wearable computers. In particular, it presents fundamental research results as well as supporting softw...

  11. What Computational Approaches Should be Taught for Physics?

    Science.gov (United States)

    Landau, Rubin

    2005-03-01

The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. Teaching such Introductory Scientific Computing courses requires choices as to which subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  12. Computer Tutors: An Innovative Approach to Computer Literacy. Part I: The Early Stages.

    Science.gov (United States)

    Targ, Joan

    1981-01-01

    In Part I of this two-part article, the author describes the evolution of the Computer Tutor project in Palo Alto, California, and the strategies she incorporated into a successful student-taught computer literacy program. Journal availability: Educational Computer, P.O. Box 535, Cupertino, CA 95015. (Editor/SJL)

  13. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The need to study computer modeling arises because current trends toward strengthening the general-education and worldview functions of computer science call for additional research into the…

  14. A Human-Centred Tangible approach to learning Computational Thinking

    Directory of Open Access Journals (Sweden)

    Tommaso Turchi

    2016-08-01

    Full Text Available Computational Thinking has recently become a focus of many teaching and research domains; it encapsulates those thinking skills integral to solving complex problems using a computer, thus being widely applicable in our society. It is influencing research across many disciplines and also coming into the limelight of education, mostly thanks to public initiatives such as the Hour of Code. In this paper we present our arguments for promoting Computational Thinking in education through the Human-centred paradigm of Tangible End-User Development, namely by exploiting objects whose interactions with the physical environment are mapped to digital actions performed on the system.

  15. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met

  16. Towards an Approach of Semantic Access Control for Cloud Computing

    Science.gov (United States)

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

With the development of cloud computing, mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides a solution for semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in cloud computing environments. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the security field, and provides a new way of thinking about access control in cloud computing.

  17. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.

    2011-01-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity

  18. Efficient Discovery of Novel Multicomponent Mixtures for Hydrogen Storage: A Combined Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States); Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry

    2016-11-28

    The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage, which enable the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H2 or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH2+NH3BH3] and nitrogen-hydrogen based borohydrides [e.g. Al(BH4)3(NH3)3]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility. And

  19. Development of Computer Science Disciplines - A Social Network Analysis Approach

    OpenAIRE

    Pham, Manh Cuong; Klamma, Ralf; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science considers conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of important journals, it lacks computer science conference and ...

  20. New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy

    International Nuclear Information System (INIS)

    Colvin, M; Krishnan, V V

    2003-01-01

The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, which allows an inherent large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits that implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates shrinks toward atomic dimensions, the performance of such a system is no longer classical but is instead governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable for classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing long numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until recently, when several methods were proposed to build an experimental QC. These methods include trapped ions, cavity-QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the spins with
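As a minimal numerical illustration of the gates mentioned (independent of any NMR implementation), the Hadamard and controlled-NOT matrices acting on the state |00⟩ produce a Bell state:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # controlled-NOT gate

state = np.zeros(4)
state[0] = 1.0                         # |00>

# Hadamard on the first qubit, then CNOT: the standard Bell-state circuit.
bell = CNOT @ np.kron(H, np.eye(2)) @ state
print(bell)                            # (|00> + |11>) / sqrt(2)
```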

  1. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

Full Text Available This paper proposes a cloud computing framework in a smart grid environment that creates a small integrated energy hub supporting real-time computation for handling huge volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
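The scheduling decision at the heart of DSM can be sketched in a few lines: choose the time slot that minimizes expected price over stochastic scenarios. The prices and probabilities below are invented, and the paper's full stochastic dynamic program is considerably richer:

```python
# Price scenarios per hour and their probabilities (all numbers invented).
prices = {"peak": [5.0, 8.0, 6.0], "off": [3.0, 2.0, 4.0]}
prob = {"peak": 0.4, "off": 0.6}

def expected_cost(hour):
    """Expected price of running a shiftable load at a given hour."""
    return sum(prob[s] * prices[s][hour] for s in prices)

# DSM decision: defer the load to the cheapest expected slot.
best_hour = min(range(3), key=expected_cost)
print(best_hour, expected_cost(best_hour))
```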

  2. Turning education into action: Impact of a collective social education approach to improve nurses' ability to recognize and accurately assess delirium in hospitalized older patients.

    Science.gov (United States)

    Travers, Catherine; Henderson, Amanda; Graham, Fred; Beattie, Elizabeth

    2018-03-01

Although cognitive impairment including dementia and delirium is common in older hospital patients, it is not well recognized or managed by hospital staff, potentially resulting in adverse events. This paper describes, and reports on the impact of, a collective social education approach to improving both nurses' knowledge of, and screening for, delirium. Thirty-four experienced nurses from six hospital wards became Cognition Champions (CogChamps) to lead their wards in a collective social education process about cognitive impairment and the assessment of delirium. At the outset, the CogChamps were provided with comprehensive education about dementia and delirium from a multidisciplinary team of clinicians. Their knowledge was assessed to ascertain they had the requisite understanding to engage in education as a collective social process, namely, with each other and their local teams. Following this, they developed ward-specific Action Plans in collaboration with their teams aimed at educating ward nurses and evaluating their ability to accurately assess and care for patients with delirium. The plans were implemented over five months. The broader nursing teams' knowledge was assessed, together with their ability to accurately assess patients for delirium. Each ward implemented their Action Plan to varying degrees, and key achievements included the education of a majority of ward nurses about delirium and the certification of the majority as competent to assess patients for delirium using the Confusion Assessment Method. Two wards collected pre- and post-audit data that demonstrated a substantial improvement in delirium screening rates. The education process led by CogChamps and supported by educators and clinical experts provides an example of successfully educating nurses about delirium and improving screening rates of patients for delirium. ACTRN 12617000563369. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Computer aided approach for qualitative risk assessment of engineered systems

    International Nuclear Information System (INIS)

    Crowley, W.K.; Arendt, J.S.; Fussell, J.B.; Rooney, J.J.; Wagner, D.P.

    1978-01-01

    This paper outlines a computer aided methodology for determining the relative contributions of various subsystems and components to the total risk associated with an engineered system. Major contributors to overall task risk are identified through comparison of an expected frequency density function with an established risk criterion. Contributions that are inconsistently high are also identified. The results from this analysis are useful for directing efforts for improving system safety and performance. An analysis of uranium hexafluoride handling risk at a gaseous diffusion uranium enrichment plant using a preliminary version of the computer program EXCON is briefly described and illustrated

  4. Environmental sciences and computations: a modular data based systems approach

    International Nuclear Information System (INIS)

    Crawford, T.V.; Bailey, C.E.

    1975-07-01

    A major computer code for environmental calculations is under development at the Savannah River Laboratory. The primary aim is to develop a flexible, efficient capability to calculate, for all significant pathways, the dose to man resulting from releases of radionuclides from the Savannah River Plant and from other existing and potential radioactive sources in the southeastern United States. The environmental sciences programs at SRP are described, with emphasis on the development of the calculational system. It is being developed as a modular data-based system within the framework of the larger JOSHUA Computer System, which provides data management, terminal, and job execution facilities. (U.S.)

  5. Computer assisted pyeloplasty in children the retroperitoneal approach

    DEFF Research Database (Denmark)

    Olsen, L H; Jorgensen, T M

    2004-01-01

    PURPOSE: We describe the first series of computer assisted retroperitoneoscopic pyeloplasty in children using the Da Vinci Surgical System (Intuitive Surgical, Inc., Mountainview, California) with regard to setup, method, operation time, complications and preliminary outcome. The small space...... with the Da Vinci Surgical System. With the patient in a lateral semiprone position the retroperitoneal space was developed by blunt and balloon dissection. Three ports were placed for the computer assisted system and 1 for assistance. Pyeloplasty was performed with the mounted system placed behind...

  6. Thermodynamic and relative approach to compute glass-forming

    Indian Academy of Sciences (India)

    This study deals with the evaluation of glass-forming ability (GFA) of oxides and is a critical reading of Sun and Rawson thermodynamic approach to quantify this aptitude. Both approaches are adequate but ambiguous regarding the behaviour of some oxides (tendency to amorphization or crystallization). Indeed, ZrO2 and ...

  7. A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia

    Science.gov (United States)

    Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub

    2011-01-01

    We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…

  8. Computer Adaptive Testing, Big Data and Algorithmic Approaches to Education

    Science.gov (United States)

    Thompson, Greg

    2017-01-01

    This article critically considers the promise of computer adaptive testing (CAT) and digital data to provide better and quicker data that will improve the quality, efficiency and effectiveness of schooling. In particular, it uses the case of the Australian NAPLAN test that will become an online, adaptive test from 2016. The article argues that…

  9. A Cellular Automata Approach to Computer Vision and Image Processing.

    Science.gov (United States)

    1980-09-01

the ACM, vol. 15, no. 9, pp. 827-837. [Duda and Hart] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, 1973. ... Center TR-738, 1979. [Farley] Arthur M. Farley and Andrzej Proskurowski, "Gossiping in Grid Graphs", University of Oregon Computer Science Department CS-TR

  10. New approach for virtual machines consolidation in heterogeneous computing systems

    Czech Academy of Sciences Publication Activity Database

    Fesl, Jan; Cehák, J.; Doležalová, Marie; Janeček, J.

    2016-01-01

    Roč. 9, č. 12 (2016), s. 321-332 ISSN 1738-9968 Institutional support: RVO:60077344 Keywords : consolidation * virtual machine * distributed Subject RIV: JD - Computer Applications, Robotics http://www.sersc.org/journals/IJHIT/vol9_no12_2016/29.pdf

  11. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  12. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  13. Computational approaches to cognition: the bottom-up view.

    Science.gov (United States)

    Koch, C

    1993-04-01

How can higher level aspects of cognition, such as figure-ground segregation, object recognition, selective focal attention and ultimately even awareness, be implemented at the level of synapses and neurons? A number of theoretical studies emerging from the connectionist and computational neuroscience communities are starting to address these issues using neurally plausible models.

  14. Linguistics, Computers, and the Language Teacher. A Communicative Approach.

    Science.gov (United States)

    Underwood, John H.

    This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…

  15. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... computer modeling used as a research method applied in the process ... conclusions discuss the benefits for students who analyzed the ... accounting education process the case study method should not .... providing travel safety information to passengers ... from literature readings with practical problems.

  16. R for cloud computing an approach for data scientists

    CERN Document Server

    Ohri, A

    2014-01-01

    R for Cloud Computing looks at some of the tasks performed by business analysts on the desktop (PC era)  and helps the user navigate the wealth of information in R and its 4000 packages as well as transition the same analytics using the cloud.  With this information the reader can select both cloud vendors  and the sometimes confusing cloud ecosystem as well  as the R packages that can help process the analytical tasks with minimum effort and cost, and maximum usefulness and customization. The use of Graphical User Interfaces (GUI)  and Step by Step screenshot tutorials is emphasized in this book to lessen the famous learning curve in learning R and some of the needless confusion created in cloud computing that hinders its widespread adoption. This will help you kick-start analytics on the cloud including chapters on cloud computing, R, common tasks performed in analytics, scrutiny of big data analytics, and setting up and navigating cloud providers. Readers are exposed to a breadth of cloud computing ch...

  17. A "Service-Learning Approach" to Teaching Computer Graphics

    Science.gov (United States)

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  18. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
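The replica-averaging idea can be demonstrated with synthetic data: averaging over independent replicas shrinks the spread of a noisy velocity estimate by roughly 1/sqrt(N). The velocity and noise level below are arbitrary illustrations, not values from the study:

```python
import random
import statistics

random.seed(0)  # reproducible noise

def noisy_sample():
    """One MD-style velocity measurement: a true moderate velocity
    plus thermal noise (both values arbitrary)."""
    return 0.5 + random.gauss(0.0, 0.2)

# Raw samples from a single replica.
single = [noisy_sample() for _ in range(50)]

# Replica averaging: each of 20 independent replicas contributes its mean.
replica_means = [statistics.mean(noisy_sample() for _ in range(50))
                 for _ in range(20)]

# The spread across replica means is far below the raw sample spread.
print(statistics.stdev(single), statistics.stdev(replica_means))
```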

  19. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  20. Results of computer assisted mini-incision subvastus approach for total knee arthroplasty.

    Science.gov (United States)

    Turajane, Thana; Larbpaiboonpong, Viroj; Kongtharvonskul, Jatupon; Maungsiri, Samart

    2009-12-01

groups), and Knee society score preoperative and postoperative [64.6 (59.8-69.4) and 93.7 (90.8-96.65)]: 69 (63.6-74.39) 92.36 (88.22-96.5)]. The complications found in both groups were similar. No deep vein thrombosis, no fracture of either femur or tibia, no vascular injury, and no pin tract pain or infection was found in either group. The computer assisted mini-incision subvastus approach (CMS-TKA) is an appropriate procedure for all varus deformities, without limitation from associated bone loss, flexion contracture, or BMI, except for fixed valgus deformity. To ensure the clinical outcomes, several key steps are essential to this approach, including accurate registration, precise bone cuts and ligament balancing, and good cementing technique.

  1. A unified momentum equation approach for computing thermal residual stresses during melting and solidification

    Science.gov (United States)

    Yeo, Haram; Ki, Hyungson

    2018-03-01

    In this article, we present a novel numerical method for computing thermal residual stresses from a viewpoint of fluid-structure interaction (FSI). In a thermal processing of a material, residual stresses are developed as the material undergoes melting and solidification, and liquid, solid, and a mixture of liquid and solid (or mushy state) coexist and interact with each other during the process. In order to accurately account for the stress development during phase changes, we derived a unified momentum equation from the momentum equations of incompressible fluids and elastoplastic solids. In this approach, the whole fluid-structure system is treated as a single continuum, and the interaction between fluid and solid phases across the mushy zone is naturally taken into account in a monolithic way. For thermal analysis, an enthalpy-based method was employed. As a numerical example, a two-dimensional laser heating problem was considered, where a carbon steel sheet was heated by a Gaussian laser beam. Momentum and energy equations were discretized on a uniform Cartesian grid in a finite volume framework, and temperature-dependent material properties were used. The austenite-martensite phase transformation of carbon steel was also considered. In this study, the effects of solid strains, fluid flow, mushy zone size, and laser heating time on residual stress formation were investigated.

  2. Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.

    Science.gov (United States)

    Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao

    2013-09-10

    Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with the protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In pKa calculations, the charge on the proton is varied as a fraction between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of a large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potential approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.
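    The λ-coupled free-energy machinery the method builds on can be illustrated generically with thermodynamic integration (a textbook sketch with entirely synthetic numbers, not the authors' mixing-Hamiltonian implementation): integrate the ensemble average ⟨∂H/∂λ⟩ over λ and convert ΔG to a pKa through ΔG = ln(10)·RT·pKa.

```python
import numpy as np

R = 0.0019872      # gas constant, kcal/(mol*K)
T = 298.15         # temperature, K

# Hypothetical ensemble averages of dH/dlambda (kcal/mol) at a few
# lambda windows between the deprotonated (0) and protonated (1) states.
lam  = np.linspace(0.0, 1.0, 5)
dudl = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Thermodynamic integration: Delta G = integral of <dH/dlambda> d(lambda),
# here evaluated with the trapezoidal rule.
delta_g = np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam))

# Delta G = ln(10) * R * T * pKa
pka = delta_g / (np.log(10.0) * R * T)
```

    With these toy averages ΔG comes out to 6.0 kcal/mol, i.e., a pKa of about 4.4; in a real calculation each `dudl` entry would be a converged simulation average at that λ window.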

  3. A Discrete Approach to Computer-Oriented Calculus.

    Science.gov (United States)

    Gordon, Sheldon P.

    1979-01-01

    Some of the implications and advantages of an instructional approach using results from the calculus of finite differences and finite sums, both for motivation and as tools leading to applications, are discussed. (MP)

  4. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Moreover, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
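    The folded allele frequency spectrum used by PopSizeABC is cheap to obtain from unpolarized genotype data; a minimal sketch (with a toy genotype matrix, not the cattle data) might look like:

```python
import numpy as np

# Unphased diploid genotypes: rows = individuals, columns = SNPs,
# entries = number of alternate alleles (0, 1 or 2). Toy data only.
genotypes = np.array([
    [0, 1, 2, 1],
    [1, 1, 0, 2],
    [0, 2, 1, 2],
])
n_haplotypes = 2 * genotypes.shape[0]          # 6 sampled chromosomes

alt_counts = genotypes.sum(axis=0)             # alternate-allele count per SNP
# Folding: without an outgroup we cannot polarize alleles, so use the
# minor-allele count min(c, n - c) instead.
folded = np.minimum(alt_counts, n_haplotypes - alt_counts)

# Folded AFS: number of SNPs with minor-allele count k = 1 .. n/2
afs = np.bincount(folded, minlength=n_haplotypes // 2 + 1)[1:]
```

    Monomorphic sites (folded count 0) are dropped by the `[1:]` slice; in practice the spectrum would also be restricted to SNPs with common alleles, as the abstract recommends for robustness to sequencing error.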

  5. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  6. Software approach to automatic patching of analog computer

    Science.gov (United States)

    1973-01-01

    The Automatic Patching Verification program (APV) is described, which provides the hybrid computer programmer with a convenient method of performing a static check of the analog portion of his study. The static check ensures that the program is patched as specified, and that the computing components being used are operating correctly. The APV language the programmer uses to specify his conditions and interconnections is similar to the FORTRAN language in syntax. The APV control program reads APV source program statements from an assigned input device. Each source program statement is processed immediately after it is read. A statement may select an analog console, set an analog mode, set a potentiometer or DAC, or read from the analog console and perform a test. Statements are read and processed sequentially. If an error condition is detected, an output occurs on an assigned output device. When an end statement is read, the test is terminated.

  7. New Computational Approach to Electron Transport in Irregular Graphene Nanostructures

    Science.gov (United States)

    Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey

    2009-03-01

    For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.

  8. A Neural Information Field Approach to Computational Cognition

    Science.gov (United States)

    2016-11-18

    effects of distraction during list memory. These distractions include short and long delays before recall, and continuous distraction (forced rehearsal)... memory encoding and replay in hippocampus. Computational Neuroscience Society (CNS), p. 166, 2014. D. A. Pinotsis, Neural Field Coding of Short Term... performance of children learning to count in a SPA model; proposed a new SPA model of cognitive load using the N-back task; developed a new model of the

  9. A Novel Biometric Approach for Authentication In Pervasive Computing Environments

    OpenAIRE

    Rachappa,; Divyajyothi M G; D H Rao

    2016-01-01

    The paradigm of embedding computing devices in our surrounding environment has gained more interest in recent days. Along with contemporary technology come challenges, the most important being security and privacy. Keeping the compactness and memory constraints of pervasive devices in mind, the biometric techniques proposed for identification should be robust and dynamic. In this work, we propose an emerging scheme that is based on a few exclusive human traits and characte...

  10. Modeling Cu{sup 2+}-Aβ complexes from computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2015-09-15

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu{sup 2+} metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu{sup 2+}-Aβ coordination and to build plausible Cu{sup 2+}-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  11. Identifying differences in brain activities and an accurate detection of autism spectrum disorder using resting state functional-magnetic resonance imaging : A spatial filtering approach.

    Science.gov (United States)

    Subbaraju, Vigneshwaran; Suresh, Mahanand Belathur; Sundaram, Suresh; Narasimhan, Sundararajan

    2017-01-01

    This paper presents a new approach for detecting major differences in brain activities between Autism Spectrum Disorder (ASD) patients and neurotypical subjects using resting state fMRI. Further, the method also extracts discriminative features for an accurate diagnosis of ASD. The proposed approach determines a spatial filter that projects the covariance matrices of the Blood Oxygen Level Dependent (BOLD) time-series signals from both the ASD patients and neurotypical subjects in orthogonal directions such that they are highly separable. The inverse of this filter also provides a spatial pattern map within the brain that highlights those regions responsible for the distinguishable activities between the ASD patients and neurotypical subjects. For better classification, highly discriminative log-variance features providing the maximum separation between the two classes are extracted from the projected BOLD time-series data. A detailed study has been carried out using the publicly available data from the Autism Brain Imaging Data Exchange (ABIDE) consortium for different gender and age groups. The study results indicate that for all the above categories, the regional differences in resting state activities are more commonly found in the right hemisphere than in the left hemisphere of the brain. Among males, a clear shift in activities to the prefrontal cortex is observed for ASD patients, while other parts of the brain show diminished activities compared to neurotypical subjects. Among females, such a clear shift is not evident; however, several regions, especially in the posterior and medial portions of the brain, show diminished activities due to ASD. Finally, the classification performance obtained using the log-variance features is found to be better when compared to earlier studies in the literature. Copyright © 2016 Elsevier B.V. All rights reserved.
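    The described spatial filter closely resembles the Common Spatial Patterns technique: jointly diagonalize the two class covariance matrices so that projected signals have maximal variance for one class and minimal for the other, then take log-variance features. A hedged sketch on synthetic multichannel data (channel counts, trial counts, and the data itself are invented; this is not the ABIDE pipeline):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_ch, n_t = 20, 4, 200

def make_trials(strong_channel):
    """Synthetic trials whose variance is concentrated in one channel."""
    x = rng.standard_normal((n_trials, n_ch, n_t))
    x[:, strong_channel, :] *= 3.0
    return x

class_a = make_trials(strong_channel=0)   # stands in for one subject group
class_b = make_trials(strong_channel=1)   # stands in for the other group

def mean_cov(trials):
    return np.mean([t @ t.T / n_t for t in trials], axis=0)

ca, cb = mean_cov(class_a), mean_cov(class_b)

# Generalized eigenproblem: ca w = lambda (ca + cb) w.
# Columns of w are sorted by eigenvalue; the extremes are most discriminative.
_, w = eigh(ca, ca + cb)

def log_var_features(trials, filters):
    proj = np.einsum('cf,nct->nft', filters, trials)  # apply spatial filters
    return np.log(proj.var(axis=2))                   # log-variance features

feat_a = log_var_features(class_a, w)
feat_b = log_var_features(class_b, w)
```

    The last filter (largest eigenvalue) maximizes the variance ratio in favor of class A, and the first filter favors class B, which is what makes the log-variance features separable.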

  12. Computer-assisted modeling: Contributions of computational approaches to elucidating macromolecular structure and function: Final report

    International Nuclear Information System (INIS)

    Walton, S.

    1987-01-01

    The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field

  13. Changes to a modelling approach with the use of computer

    DEFF Research Database (Denmark)

    Andresen, Mette

    2006-01-01

    This paper reports on a Ph.D. project, which was part of a larger research and development project (see www.matnatverdensklasse.dk). In the reported part of the project, each student had a laptop at his disposal for at least two years. The Ph.D. project inquires into the try-out, in four classes, of teaching materials on differential equations. One of the objectives of the project was change at two levels: 1) changes at curriculum level and 2) changes in the intentions of modelling and using models. The paper relates the changes at these two levels and discusses how the use of computers can serve...

  14. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  15. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step size over the entire simulation.
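    The kind of error-controlled time-step adaptivity described here can be sketched with a textbook step-doubling controller (a generic illustration on a toy ODE, not the paper's estimator): compare one full explicit step against two half steps, accept when the discrepancy is within tolerance, and rescale dt accordingly.

```python
import math

def f(t, y):
    return -y                       # toy ODE y' = -y, exact solution e^{-t}

def euler(t, y, dt):
    return y + dt * f(t, y)

tol, t, y, dt = 1e-4, 0.0, 1.0, 1e-6
dts = []                            # accepted step sizes
while t < 1.0:
    dt = min(dt, 1.0 - t)           # do not step past the end time
    big = euler(t, y, dt)
    half = euler(t + dt / 2, euler(t, y, dt / 2), dt / 2)
    err = abs(half - big)           # local error estimate (step doubling)
    if err <= tol:                  # accept the step
        t += dt
        y = 2 * half - big          # Richardson-extrapolated update
        dts.append(dt)
    # rescale dt toward the tolerance, with a safety factor and clipping
    scale = 0.9 * math.sqrt(tol / max(err, 1e-16))
    dt *= min(10.0, max(0.1, scale))
```

    Starting from a deliberately tiny initial step, the controller grows dt by several orders of magnitude while keeping each local error under the user tolerance, which is the behavior the abstract describes.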

  16. Sinc-Approximations of Fractional Operators: A Computing Approach

    Directory of Open Access Journals (Sweden)

    Gerd Baumann

    2015-06-01

    Full Text Available We discuss a new approach to represent fractional operators by Sinc approximation using convolution integrals. A spin-off of the convolution representation is an effective inverse Laplace transform. Several examples demonstrate the application of the method to different practical problems.

  17. Mechanisms of Neurofeedback: A Computation-theoretic Approach.

    Science.gov (United States)

    Davelaar, Eddy J

    2018-05-15

    Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
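    The importance-sampling analogy can be made concrete with a generic self-normalized estimator (a toy Gaussian model chosen so the answer can be checked analytically; the striatal simulation itself is not reproduced here): candidate representations are drawn from a prior "proposal" and reweighted by how well they explain the feedback.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical belief over a target peak-alpha frequency (Hz):
# prior N(10, 2^2); one feedback observation at 11 Hz with noise sd 1.
prior_mu, prior_sd = 10.0, 2.0
obs, obs_sd = 11.0, 1.0

# Sample candidate representations from the prior (the proposal) and
# weight each by the likelihood of the observed feedback.
theta = rng.normal(prior_mu, prior_sd, size=200_000)
weights = np.exp(-0.5 * ((obs - theta) / obs_sd) ** 2)
weights /= weights.sum()                       # self-normalization

posterior_mean = np.sum(weights * theta)

# Conjugate-Gaussian analytic posterior mean, for comparison:
analytic = (prior_mu / prior_sd**2 + obs / obs_sd**2) / \
           (1 / prior_sd**2 + 1 / obs_sd**2)
```

    The weighted samples concentrate posterior mass on representations consistent with the feedback, which is the sense in which the paper likens neurofeedback to a search via importance sampling.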

  18. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  19. Novel approach for dam break flow modeling using computational intelligence

    Science.gov (United States)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on a computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models. These include the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.

  20. A 3D computer graphics approach to brachytherapy planning.

    Science.gov (United States)

    Weichert, Frank; Wawro, Martin; Wilke, Carsten

    2004-06-01

    Intravascular brachytherapy (IVB) can significantly reduce the risk of restenosis after interventional treatment of stenotic arteries, if planned and applied correctly. In order to facilitate computer-based IVB planning, a three-dimensional reconstruction of the stenotic artery based on intravascular ultrasound (IVUS) sequences is desirable. For this purpose, the frames of the IVUS sequence are properly aligned in space, and possible gaps in between the IVUS frames are filled by interpolation with radial basis functions known from scattered data interpolation. The alignment procedure uses additional information which is obtained from biplane X-ray angiography performed simultaneously during the capturing of the IVUS sequence. After IVUS images and biplane angiography data are acquired from the patient, the vessel-wall borders and the IVUS catheter are detected by an active contour algorithm. Next, the twist (relative orientation) between adjacent IVUS frames is determined by a sequential triangulation method. The absolute orientation of each frame is established by a stochastic analysis based on anatomical landmarks. Finally, the reconstructed 3D vessel model is visualized by methods of combined volume and polygon rendering. The reconstruction is then used for the computation of the radiation distribution within the tissue, emitted from a beta-radiation source. All these steps are performed during the percutaneous intervention.

  1. A Computational Approach to the Quantification of Animal Camouflage

    Science.gov (United States)

    2014-06-01

    and Norm Farr, for providing great feedback on my research and encouragement along the way. Finally, I thank my dad and my sister, for their love... that live in different habitats. Another approach, albeit logistically difficult, would be to transport cuttlefish native to a chromatically poor habitat to a chromatically rich habitat. Many such challenges remain in the field of sensory ecology, not just of cephalopods in marine habitats but many

  2. Engineering approach to model and compute electric power markets settlements

    International Nuclear Information System (INIS)

    Kumar, J.; Petrov, V.

    2006-01-01

    Back-office accounting settlement activities are an important part of market operations in Independent System Operator (ISO) organizations. A potential way to measure ISO market design correctness is to analyze how well market price signals create incentives or penalties for creating an efficient market to achieve market design goals. Market settlement rules are an important tool for implementing price signals which are fed back to participants via the settlement activities of the ISO. ISOs are currently faced with the challenge of high volumes of data resulting from the increasing size of markets and ever-changing market designs, as well as the growing complexity of wholesale energy settlement business rules. This paper analyzed the problem and presented a practical engineering solution using an approach based on mathematical formulation and modeling of large-scale calculations. The paper also presented critical comments on various differences in settlement design approaches to electric power market design, as well as further areas of development. The paper provided a brief introduction to wholesale energy market settlement systems and discussed problem formulation. An actual settlement implementation framework and a discussion of the results and conclusions were also presented. It was concluded that a proper engineering approach to this domain can yield satisfying results by formalizing wholesale energy settlements. Significant improvements were observed in the initial preparation phase, scoping and effort estimation, implementation and testing. 5 refs., 2 figs

  3. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    International Nuclear Information System (INIS)

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-01-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
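    The fluctuation route referred to above is the standard relation K_A = k_BT·⟨A⟩/⟨δA²⟩, evaluated on the CU area rather than the projected area. A sketch in reduced units with synthetic area samples (illustrative numbers, not MARTINI results):

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0                        # thermal energy in reduced units

# Synthetic time series of the fluctuating membrane area, in nm^2.
mean_area, area_sd = 100.0, 0.5
areas = rng.normal(mean_area, area_sd, size=200_000)

# Fluctuation formula: K_A = kT * <A> / Var(A)
k_area = kT * areas.mean() / areas.var()
```

    With these synthetic numbers K_A comes out near kT·100/0.25 = 400 in units of kT/nm²; in a real analysis `areas` would be the CU-area trajectory from the simulation, and the area per lipid would be ⟨A⟩ divided by the number of lipids per leaflet.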

  4. Computational approaches to identify functional genetic variants in cancer genomes

    DEFF Research Database (Denmark)

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris

    2013-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority of these drive tumor progression. We present the results of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.

  5. Computer aided fixture design - A case based approach

    Science.gov (United States)

    Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom

    2017-11-01

    Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once the fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and are placed into position with the assembly conditions satisfied. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a purpose-built retrieval system is proposed. The Visual Basic (VB) programming language is used, integrated with the SolidWorks API (Application Programming Interface) module, to improve the retrieval procedure and reduce computational time. These properties are incorporated in a numerical simulation to determine the best fit for practical use.

  6. Predicting Transport of 3,5,6-Trichloro-2-Pyridinol Into Saliva Using a Combination Experimental and Computational Approach

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Jordan Ned; Carver, Zana A.; Weber, Thomas J.; Timchalk, Charles

    2017-04-11

    A combination experimental and computational approach was developed to predict chemical transport into saliva. A serous-acinar chemical transport assay was established to measure chemical transport under non-physiological (standard cell culture medium) and physiological (surrogate plasma and saliva medium) conditions using 3,5,6-trichloro-2-pyridinol (TCPy), a metabolite of the pesticide chlorpyrifos. High levels of TCPy protein binding were observed in cell culture medium and rat plasma, resulting in different TCPy transport behaviors in the two experimental conditions. In the non-physiological transport experiment, TCPy reached equilibrium at equivalent concentrations in the apical and basolateral chambers. At higher TCPy doses, increased unbound TCPy was observed, and TCPy concentrations in the apical and basolateral chambers reached equilibrium faster than at lower doses, suggesting that only unbound TCPy is able to cross the cellular monolayer. In the physiological experiment, TCPy transport was slower than under non-physiological conditions, and equilibrium was achieved at different concentrations in the apical and basolateral chambers at a ratio (0.034) comparable to what was previously measured in rats dosed with TCPy (saliva:blood ratio: 0.049). A cellular transport computational model was developed based on TCPy protein binding kinetics and accurately simulated all transport experiments using different permeability coefficients for the two experimental conditions (1.4 vs 0.4 cm/hr for the non-physiological and physiological experiments, respectively). The computational model was integrated into a physiologically based pharmacokinetic (PBPK) model and accurately predicted TCPy concentrations in the saliva of rats dosed with TCPy. Overall, this study demonstrates an approach to predict chemical transport into saliva, potentially increasing the utility of salivary biomonitoring in the future.
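    The key observation, that only unbound chemical crosses the monolayer so equilibrium is set by the free rather than total concentrations, can be captured in a two-compartment sketch (the unbound fractions and rate constant below are hypothetical, not the fitted TCPy parameters):

```python
# Unbound fractions: heavy protein binding on the basolateral ("plasma")
# side, essentially none on the apical ("saliva") side. Hypothetical values.
fu_apical, fu_baso = 1.0, 0.05
k = 1.0            # lumped P*A/V rate constant, 1/h (equal chamber volumes)

c_apical, c_baso = 0.0, 1.0        # total concentrations, arbitrary units
dt = 0.001                         # h, explicit Euler step
for _ in range(int(50 / dt)):      # integrate 50 h of simulated time
    # Only the unbound fraction drives flux across the monolayer.
    flux = k * (fu_baso * c_baso - fu_apical * c_apical)
    c_apical += dt * flux
    c_baso   -= dt * flux          # equal volumes: total mass is conserved

ratio = c_apical / c_baso          # apical:basolateral at equilibrium
```

    At steady state the free concentrations equalize, so the total-concentration ratio settles at fu_baso/fu_apical = 0.05 here, analogous in spirit to the sub-unity saliva:blood ratios reported in the abstract.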

  7. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  8. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    Science.gov (United States)

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  9. A dynamical-systems approach for computing ice-affected streamflow

    Science.gov (United States)

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  10. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    Science.gov (United States)

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  11. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    Directory of Open Access Journals (Sweden)

    G. P. Berman

    2003-01-01

    Full Text Available The dynamics of a nuclear-spin quantum computer with a large number (L=1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed to different orders of perturbation theory and tested against an exact numerical solution.

  12. Data analysis of asymmetric structures advanced approaches in computational statistics

    CERN Document Server

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and  considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  13. Safe manning of merchant ships: an approach and computer tool

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Kozin, Igor

    2017-01-01

    In the shipping industry, staffing expenses have become a vital competition parameter. In this paper, an approach and a software tool are presented to support decisions on the staffing of merchant ships. The tool is implemented in the form of a Web user interface that makes use of discrete-event simulation and allows estimation of the workload and of whether different scenarios are successfully performed, taking account of the number of crewmembers, watch schedules, distribution of competencies, and others. The software library ‘SimManning’ at the core of the project is provided as open source...

  14. A Novel Approach for ATC Computation in Deregulated Environment

    Directory of Open Access Journals (Sweden)

    C. K. Babulal

    2006-09-01

    Full Text Available This paper presents a novel method for determining Available Transfer Capability (ATC) based on fuzzy logic. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is used to determine the step length of the homotopy continuation power flow method by considering the values of load bus voltage and the change in load bus voltage. The approach is compared with an already available method. The proposed method determines ATC for various transactions by considering the thermal limit, voltage limit, and static voltage stability limit, and is tested on the WSCC 9-bus system, the New England 39-bus system, and an Indian 181-bus system

  15. Classification of computer forensic investigation methods using a graph theory approach

    Directory of Open Access Journals (Sweden)

    Anna Ravilyevna Smolina

    2016-06-01

    Full Text Available A classification of computer forensic investigation methods based on a graph theory approach is proposed. Using this classification, the search for computer forensic investigation methods can be accelerated, simplified, and automated.

  16. Classification of computer forensic investigation methods using a graph theory approach

    OpenAIRE

    Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov

    2016-01-01

    A classification of computer forensic investigation methods based on a graph theory approach is proposed. Using this classification, the search for computer forensic investigation methods can be accelerated, simplified, and automated.

  17. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme as an effective approach to medical diagnosis. As the advantages and disadvantages of each computational intelligence technique have become known in recent years, the time has come...

  18. A survey on computational intelligence approaches for predictive modeling in prostate cancer

    OpenAIRE

    Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG

    2017-01-01

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an...

  19. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  20. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. By coupling this with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both the system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
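
    The idea of combining expert opinion with observed failures can be sketched with a Beta-Binomial model for component failure probabilities and a simple series-system structure; the priors, data, and system topology below are illustrative assumptions, not the paper's networks:

```python
def beta_posterior_mean(prior_alpha, prior_beta, failures, trials):
    """Posterior mean failure probability under a Beta-Binomial model.

    The Beta(prior_alpha, prior_beta) prior encodes expert opinion;
    observed failures/trials update it via conjugacy."""
    return (prior_alpha + failures) / (prior_alpha + prior_beta + trials)

def series_system_failure(component_probs):
    """P(system fails) = 1 - prod(1 - p_i) for independent series components."""
    reliability = 1.0
    for p in component_probs:
        reliability *= (1.0 - p)
    return 1.0 - reliability

# expert prior says ~2% failure; component-level test data update it
p1 = beta_posterior_mean(2, 98, failures=1, trials=100)
p2 = beta_posterior_mean(2, 98, failures=5, trials=100)
p_sys = series_system_failure([p1, p2])
```

    A full Bayesian network would additionally model dependencies between components rather than assuming independence, which is exactly what the paper's multilevel approach addresses.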

  1. A functional analytic approach to computer-interactive mathematics.

    Science.gov (United States)

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.

  2. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: ·         A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. ·         Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. ·         The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). ·         The MPC algorithms with neural approximation with no on-line linearization. ·         The MPC algorithms with guaranteed stability and robustness. ·         Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  3. Granular computing and decision-making interactive and iterative approaches

    CERN Document Server

    Chen, Shyi-Ming

    2015-01-01

    This volume is devoted to interactive and iterative processes of decision-making– I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems.   The volume provides the reader with an updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, brings new concepts, architectures and practice of fuzzy decision-making providing the reader with various application studies.   The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...

  4. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    Directory of Open Access Journals (Sweden)

    Seth A. Herd

    2013-01-01

    Full Text Available We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  5. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    Science.gov (United States)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in computing the exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
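
    The accuracy gain from quadratic basis functions can be illustrated with a 1D interpolation toy (not a micromagnetic solver): on the same mesh, quadratic Lagrange shape functions reproduce a smooth field far more accurately than linear ones. The mesh size and sampling density are illustrative:

```python
import math

def interp_error(n_elems, order):
    """Max interpolation error for sin(x) on [0, pi] with n_elems elements."""
    h = math.pi / n_elems
    worst = 0.0
    samples = 20
    for e in range(n_elems):
        x0, x2 = e * h, (e + 1) * h
        x1 = 0.5 * (x0 + x2)                  # midside node (quadratic only)
        f0, f1, f2 = math.sin(x0), math.sin(x1), math.sin(x2)
        for s in range(samples + 1):
            x = x0 + (x2 - x0) * s / samples
            t = (x - x0) / (x2 - x0)          # local coordinate in [0, 1]
            if order == 1:
                u = (1 - t) * f0 + t * f2     # linear shape functions
            else:
                # quadratic Lagrange shape functions at nodes t = 0, 1/2, 1
                u = (2*t - 1)*(t - 1)*f0 + 4*t*(1 - t)*f1 + t*(2*t - 1)*f2
            worst = max(worst, abs(u - math.sin(x)))
    return worst

err_lin = interp_error(8, order=1)
err_quad = interp_error(8, order=2)
```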

  6. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  7. Promises and Pitfalls of Computer-Supported Mindfulness: Exploring a Situated Mobile Approach

    Directory of Open Access Journals (Sweden)

    Ralph Vacca

    2017-12-01

    Full Text Available Computer-supported mindfulness (CSM) is a burgeoning area filled with varied approaches such as mobile apps and EEG headbands. However, many of the approaches focus on providing meditation guidance. The ubiquity of mobile devices may provide new opportunities to support mindfulness practices that are more situated in everyday life. In this paper, a new situated mindfulness approach is explored through a specific mobile app design. Through an experimental design, the approach is compared to traditional audio-based mindfulness meditation, and a mind-wandering control, over a one-week period. The study demonstrates the viability of a situated mobile mindfulness approach for inducing mindfulness states. However, phenomenological aspects of the situated mobile approach suggest both promises and pitfalls for computer-supported mindfulness using a situated approach.

  8. Advanced Computational Modeling Approaches for Shock Response Prediction

    Science.gov (United States)

    Derkevorkian, Armen; Kolaini, Ali R.; Peterson, Lee

    2015-01-01

    Motivation: (1) The activation of pyroshock devices such as explosives, separation nuts, pin-pullers, etc. produces high-frequency transient structural response, typically from a few tens of Hz to several hundreds of kHz. (2) Lack of reliable analytical tools makes the prediction of appropriate design and qualification test levels a challenge. (3) In the past few decades, several attempts have been made to develop methodologies that predict the structural responses to shock environments. (4) Currently, there is no validated approach that is viable for predicting shock environments over the full frequency range (i.e., 100 Hz to 10 kHz). Scope: (1) Model, analyze, and interpret space structural systems with complex interfaces and discontinuities, subjected to shock loads. (2) Assess the viability of a suite of numerical tools to simulate transient, non-linear solid mechanics and structural dynamics problems, such as shock wave propagation.

  9. A zero-dimensional approach to compute real radicals

    Directory of Open Access Journals (Sweden)

    Silke J. Spang

    2008-04-01

    Full Text Available The notion of the real radical is a fundamental tool in Real Algebraic Geometry. It takes the role of the radical ideal in Complex Algebraic Geometry. In this article I shall describe the zero-dimensional approach and the efficiency improvements I found during the work on my diploma thesis at the University of Kaiserslautern (cf. [6]). The main focus of this article is on maximal ideals and the properties they have to fulfil to be real. New theorems and properties about maximal ideals are introduced which yield a heuristic, prepare_max, which splits the maximal ideals into three classes, namely real, not real, and the class where we can't be sure whether they are real or not. For the latter we have to apply a coordinate change into general position until we are sure about realness. Finally, this yields a randomized algorithm for real radicals. The underlying theorems and algorithms are described in detail.

  10. Assessing Power Monitoring Approaches for Energy and Power Analysis of Computers

    OpenAIRE

    El Mehdi Diouria, Mohammed; Dolz Zaragozá, Manuel Francisco; Glückc, Olivier; Lefèvre, Laurent; Alonso, Pedro; Catalán Pallarés, Sandra; Mayo, Rafael; Quintana Ortí, Enrique S.

    2014-01-01

    Large-scale distributed systems (e.g., datacenters, HPC systems, clouds, large-scale networks, etc.) consume, and will continue to consume, enormous amounts of energy. Therefore, accurately monitoring the power dissipation and energy consumption of these systems is increasingly unavoidable. The main novelty of this contribution is the analysis and evaluation of different external and internal power monitoring devices tested using two different computing systems, a server and a desktop machine. Furthermore, we prov...

  11. Data science in R a case studies approach to computational reasoning and problem solving

    CERN Document Server

    Nolan, Deborah

    2015-01-01

    Effectively Access, Transform, Manipulate, Visualize, and Reason about Data and Computation. Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions. The book's collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including: Non-standar

  12. An approach to computing marginal land use change carbon intensities for bioenergy in policy applications

    International Nuclear Information System (INIS)

    Wise, Marshall; Hodson, Elke L.; Mignone, Bryan K.; Clarke, Leon; Waldhoff, Stephanie; Luckow, Patrick

    2015-01-01

    Accurately characterizing the emissions implications of bioenergy is increasingly important to the design of regional and global greenhouse gas mitigation policies. Market-based policies, in particular, often use information about carbon intensity to adjust relative deployment incentives for different energy sources. However, the carbon intensity of bioenergy is difficult to quantify because carbon emissions can occur when land use changes to expand production of bioenergy crops rather than simply when the fuel is consumed as for fossil fuels. Using a long-term, integrated assessment model, this paper develops an approach for computing the carbon intensity of bioenergy production that isolates the marginal impact of increasing production of a specific bioenergy crop in a specific region, taking into account economic competition among land uses. We explore several factors that affect emissions intensity and explain these results in the context of previous studies that use different approaches. Among the factors explored, our results suggest that the carbon intensity of bioenergy production from land use change (LUC) differs by a factor of two depending on the region in which the bioenergy crop is grown in the United States. Assumptions about international land use policies (such as those related to forest protection) and crop yields also significantly impact carbon intensity. Finally, we develop and demonstrate a generalized method for considering the varying time profile of LUC emissions from bioenergy production, taking into account the time path of future carbon prices, the discount rate and the time horizon. When evaluated in the context of power sector applications, we found electricity from bioenergy crops to be less carbon-intensive than conventional coal-fired electricity generation and often less carbon-intensive than natural-gas fired generation. - Highlights: • Modeling methodology for assessing land use change emissions from bioenergy • Use GCAM
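
    The time-profile idea can be sketched with a hypothetical amortization scheme in which up-front LUC emissions are divided by the discounted bioenergy output over the chosen horizon; all numbers and the functional form are illustrative, not the paper's GCAM-based method:

```python
def amortized_luc_intensity(upfront_luc_emissions, annual_output,
                            horizon_years, discount_rate):
    """Up-front LUC emissions per unit of discounted bioenergy output."""
    discounted_output = sum(annual_output / (1 + discount_rate) ** t
                            for t in range(horizon_years))
    # emissions occur at t = 0 and are therefore undiscounted
    return upfront_luc_emissions / discounted_output

# a higher discount rate weights the up-front emissions more heavily
low = amortized_luc_intensity(1000.0, 10.0, 30, 0.00)
high = amortized_luc_intensity(1000.0, 10.0, 30, 0.05)
```

    This captures the qualitative point above: the carbon intensity attributed to LUC depends not just on total emissions but on the discount rate and time horizon chosen for the evaluation.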

  13. Effects of artificial gravity on the cardiovascular system: Computational approach

    Science.gov (United States)

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configuration and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.

  14. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers’ needs. Due to its advantages, Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. For education, the Cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. Diffusion of pedagogical content in the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. From this comes the idea of using IMS-LD coupled with metadata in the Cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the Cloud. The new approach that we followed focuses on keeping Educational Cloud content interoperable for teachers and learners, and facilitates the tasks of identifying, reusing, sharing, and adapting teaching and learning resources in the Cloud.

  15. A Hybrid Soft Computing Approach for Subset Problems

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2013-01-01

    Full Text Available Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partitions) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving this kind of problem. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. In constraint programming, on the other hand, the process of solving Constraint Satisfaction Problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistency either prior to or during search. Our hybrid approach was tested with set covering and set partitioning dataset benchmarks. It was observed that the performance of ACO was improved by embedding this filtering technique in its constructive phase.
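
    A compact sketch of the hybrid idea for set covering, with a trivial pruning step standing in for the arc-consistency filter (candidate sets that cover nothing still uncovered are discarded in the constructive phase); the instance and all parameters are illustrative:

```python
import random

def aco_set_cover(universe, subsets, n_ants=20, n_iters=50,
                  evaporation=0.1, seed=0):
    """Ant colony optimization for set covering with candidate filtering."""
    rng = random.Random(seed)
    pheromone = [1.0] * len(subsets)
    best = None
    for _ in range(n_iters):
        for _ant in range(n_ants):
            uncovered = set(universe)
            chosen = []
            while uncovered:
                # filtering: keep only sets that still cover something
                cands = [i for i in range(len(subsets))
                         if i not in chosen and subsets[i] & uncovered]
                # selection probability ~ pheromone * coverage heuristic
                weights = [pheromone[i] * len(subsets[i] & uncovered)
                           for i in cands]
                pick = rng.choices(cands, weights=weights)[0]
                chosen.append(pick)
                uncovered -= subsets[pick]
            if best is None or len(chosen) < len(best):
                best = chosen
        # evaporate, then reinforce the best-so-far solution
        pheromone = [(1 - evaporation) * p for p in pheromone]
        for i in best:
            pheromone[i] += 1.0
    return best

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {4, 5}, {3, 4}, {5}]
cover = aco_set_cover(universe, subsets)
```

    A real arc-consistency filter would prune domains against all problem constraints (e.g., the exactly-once constraints of set partitioning), not just the coverage test used here.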

  16. Driving profile modeling and recognition based on soft computing approach.

    Science.gov (United States)

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart card as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers.
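
    The GMM-based feature extraction idea can be sketched on a hypothetical 1D pedal-pressure signal with a hand-rolled two-component EM fit; the study's actual features, mixture sizes, and FNN classifiers are far richer than this toy:

```python
import math
import random

def fit_gmm_1d(data, n_iter=50):
    """EM for a two-component 1D Gaussian mixture; returns weights, means, variances."""
    mu = [min(data), max(data)]          # spread-apart initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: reestimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return w, mu, var

rng = random.Random(1)
# hypothetical brake-pedal pressures: light taps near 2, hard braking near 8
pedal = ([rng.gauss(2, 0.5) for _ in range(200)]
         + [rng.gauss(8, 0.5) for _ in range(200)])
w, mu, var = fit_gmm_1d(pedal)
```

    In a driver-identification setting, one such mixture would be fitted per driver, and a new signal would be attributed to the driver whose mixture gives it the highest log-likelihood (or, as in the paper, the mixture parameters would feed a fuzzy neural network).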

  17. Computational model of precision grip in Parkinson’s disease: A Utility based approach

    Directory of Open Access Journals (Sweden)

    Ankur eGupta

    2013-12-01

    Full Text Available We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson’s Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals into decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the precision grip results from normal subjects and PD patients accurately (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge, the model is the first model of precision grip in PD conditions.
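
    The utility-based (rather than purely value-based) selection idea can be sketched as a risk-adjusted choice between grip strategies; the action names, reward statistics, and mean-variance utility form below are illustrative assumptions, not the paper's model:

```python
def select_action(action_stats, risk_aversion):
    """Pick the action maximizing U = mean - risk_aversion * variance.

    action_stats: {name: (mean_reward, reward_variance)}."""
    def utility(stats):
        mean, var = stats
        return mean - risk_aversion * var
    return max(action_stats, key=lambda a: utility(action_stats[a]))

actions = {
    "firm_grip":  (1.0, 0.1),   # reliable, modest payoff
    "light_grip": (1.2, 2.0),   # higher payoff, much riskier
}
risk_neutral = select_action(actions, risk_aversion=0.0)
risk_averse = select_action(actions, risk_aversion=0.5)
```

    A purely value-based selector (risk_aversion = 0) takes the riskier high-mean action, while a risk-sensitive utility flips the choice; varying this risk parameter is one way such a model can capture behavioral differences across dopamine conditions.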

  18. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
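
    The per-scenario Schur-complement accumulation described above can be sketched with small dense blocks; the dimensions and random matrices are illustrative, and `np.linalg.solve` stands in for the sparse incomplete augmented factorization used in PIPS/PARDISO:

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_sc, n_c = 3, 6, 4   # scenarios, per-scenario block size, first-stage size

# Random symmetric scenario blocks K_i, couplings B_i, first-stage block C
K, B = [], []
for _ in range(n_s):
    m = rng.standard_normal((n_sc, n_sc))
    K.append(np.eye(n_sc) * 10.0 + 0.1 * (m + m.T))
    B.append(rng.standard_normal((n_sc, n_c)))
C = np.eye(n_c) * 5.0 * n_s

# Each scenario contributes B_i^T K_i^{-1} B_i to the Schur complement,
# obtained by one solve with the n_c columns of B_i as right-hand sides
S = C.copy()
for K_i, B_i in zip(K, B):
    X = np.linalg.solve(K_i, B_i)     # multiple RHS; sparse in the real solver
    S -= B_i.T @ X

# Cross-check: eliminate the scenario blocks from the full assembled system
N = n_s * n_sc + n_c
A = np.zeros((N, N))
for i in range(n_s):
    sl = slice(i * n_sc, (i + 1) * n_sc)
    A[sl, sl] = K[i]
    A[sl, N - n_c:] = B[i]
    A[N - n_c:, sl] = B[i].T
A[N - n_c:, N - n_c:] = C

r = np.zeros(N)
r[N - n_c:] = 1.0
z = np.linalg.solve(A, r)             # full-system solve
y = np.linalg.solve(S, np.ones(n_c))  # Schur-complement solve
print(np.allclose(z[N - n_c:], y))    # → True
```

In the parallel setting each scenario's solve is independent, which is what makes the decomposition attractive on distributed machines.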

  19. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
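
    The separation of the conceptual model from the computational grid can be illustrated by defining a subsurface property as a function of space and sampling it onto grids of any resolution; the permeability formula below is a made-up example, not the LLNL site model:

```python
import numpy as np

def permeability(x, y):
    """Conceptual model: a spatially varying permeability field defined
    independently of any computational grid (illustrative formula)."""
    return 1e-12 * np.exp(np.sin(3.0 * x) + 0.5 * np.cos(5.0 * y))

def sample_on_grid(n):
    """Discretize the same conceptual model on an n-by-n grid."""
    xs, ys = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
    return permeability(xs, ys)

coarse = sample_on_grid(8)     # workstation-sized problem
fine = sample_on_grid(256)     # toward the massively parallel regime
# Refining the grid changes the resolution, not the model definition,
# so the engineer's conceptual-model work is reused at every scale.
print(coarse.shape, fine.shape)
```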

  20. A Knowledge Engineering Approach to Developing Educational Computer Games for Improving Students' Differentiating Knowledge

    Science.gov (United States)

    Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen

    2013-01-01

    Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…

  1. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  2. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.

  3. Accurate calculation of mutational effects on the thermodynamics of inhibitor binding to p38α MAP kinase: a combined computational and experimental study.

    Science.gov (United States)

    Zhu, Shun; Travis, Sue M; Elcock, Adrian H

    2013-07-09

    A major current challenge for drug design efforts focused on protein kinases is the development of drug resistance caused by spontaneous mutations in the kinase catalytic domain. The ubiquity of this problem means that it would be advantageous to develop fast, effective computational methods that could be used to determine the effects of potential resistance-causing mutations before they arise in a clinical setting. With this long-term goal in mind, we have conducted a combined experimental and computational study of the thermodynamic effects of active-site mutations on a well-characterized and high-affinity interaction between a protein kinase and a small-molecule inhibitor. Specifically, we developed a fluorescence-based assay to measure the binding free energy of the small-molecule inhibitor, SB203580, to the p38α MAP kinase and used it to measure the inhibitor's affinity for five different kinase mutants involving two residues (Val38 and Ala51) that contact the inhibitor in the crystal structure of the inhibitor-kinase complex. We then conducted long, explicit-solvent thermodynamic integration (TI) simulations in an attempt to reproduce the experimental relative binding affinities of the inhibitor for the five mutants; in total, a combined simulation time of 18.5 μs was obtained. Two widely used force fields - OPLS-AA/L and Amber ff99SB-ILDN - were tested in the TI simulations. Both force fields produced excellent agreement with experiment for three of the five mutants; simulations performed with the OPLS-AA/L force field, however, produced qualitatively incorrect results for the constructs that contained an A51V mutation. Interestingly, the discrepancies with the OPLS-AA/L force field could be rectified by the imposition of position restraints on the atoms of the protein backbone and the inhibitor without destroying the agreement for the other mutations; the ability to reproduce experiment depended, however, upon the strength of the restraints' force constant
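
    The thermodynamic-integration estimate ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ behind such simulations can be sketched on a toy one-dimensional harmonic potential, where exact Boltzmann samples replace the molecular dynamics sampling and the analytic answer is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
kT, k0, k1 = 1.0, 1.0, 4.0            # toy harmonic "force field" endpoints
lams = np.linspace(0.0, 1.0, 11)      # lambda windows

# At each lambda, sample the lambda-dependent potential U = 0.5*k(lam)*x^2
# and average dU/dlam = 0.5*(k1 - k0)*x^2 (exact Gaussian samples stand in
# for the explicit-solvent MD sampling of the study)
means = []
for lam in lams:
    k = k0 + lam * (k1 - k0)
    x = rng.normal(0.0, np.sqrt(kT / k), 200_000)
    means.append(0.5 * (k1 - k0) * np.mean(x * x))

means = np.array(means)
dF = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams)))  # trapezoid
exact = 0.5 * kT * np.log(k1 / k0)    # analytic free-energy change for the toy
print(round(dF, 3), round(exact, 3))  # TI estimate tracks the exact value
```

Relative binding affinities of mutants are obtained in practice from differences of such ΔF values around a thermodynamic cycle.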

  4. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a combined method of the two aforementioned approaches. The proposed method of this study is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which is characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-based approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined template-sets-centric approach proposed in this study.
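
    The template-matching half of such a pipeline can be sketched as a normalized cross-correlation search; the synthetic "landmark" patch and the noise level below are illustrative, not the study's CT templates:

```python
import numpy as np

def match_template(image, template):
    """Best top-left offset of `template` in `image` by normalized
    cross-correlation over all sliding-window positions."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Hide a small "landmark" pattern in a noisy slice, then recover its position
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (40, 40))
patch = np.outer(np.hanning(8), np.hanning(8))   # stand-in contour template
image[12:20, 25:33] += patch
pos, score = match_template(image, patch)
print(pos)   # → (12, 25)
```

The knowledge-based half would then apply hand-crafted rules to accept or reject the matched location.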

  5. Multiplex-PCR-Based Screening and Computational Modeling of Virulence Factors and T-Cell Mediated Immunity in Helicobacter pylori Infections for Accurate Clinical Diagnosis.

    Directory of Open Access Journals (Sweden)

    Sinem Oktem-Okullu

    Full Text Available The outcome of H. pylori infection is closely related to the bacterium's virulence factors and the host immune response. The association between T cells and H. pylori infection has been identified, but the effects of the nine major H. pylori-specific virulence factors (cagA, vacA, oipA, babA, hpaA, napA, dupA, ureA, ureB) on the T cell response in H. pylori-infected patients have not been fully elucidated. We developed a multiplex-PCR assay to detect nine H. pylori virulence genes within three PCR reactions. Also, the expression levels of Th1, Th17 and Treg cell specific cytokines and transcription factors were detected by using qRT-PCR assays. Furthermore, a novel expert-derived model was developed to identify a set of factors and rules that can distinguish ulcer patients from gastritis patients. Among all the virulence factors that we tested, we identified a previously unreported correlation between the presence of the napA virulence gene and ulcer disease. Additionally, a positive correlation between the H. pylori dupA virulence factor and IFN-γ, and between the H. pylori babA virulence factor and IL-17, was detected in gastritis and ulcer patients respectively. By using computer-based models, the clinical outcome of a patient infected with H. pylori can be predicted by screening the patient's H. pylori vacA m1/m2, ureA and cagA status and IFN-γ (Th1), IL-17 (Th17), and FOXP3 (Treg) expression levels. Herein, we report, for the first time, the relationship between H. pylori virulence factors and host immune responses for diagnostic prediction of gastric diseases using computer-based models.

  6. Multiplex-PCR-Based Screening and Computational Modeling of Virulence Factors and T-Cell Mediated Immunity in Helicobacter pylori Infections for Accurate Clinical Diagnosis.

    Science.gov (United States)

    Oktem-Okullu, Sinem; Tiftikci, Arzu; Saruc, Murat; Cicek, Bahattin; Vardareli, Eser; Tozun, Nurdan; Kocagoz, Tanil; Sezerman, Ugur; Yavuz, Ahmet Sinan; Sayi-Yazgan, Ayca

    2015-01-01

    The outcome of H. pylori infection is closely related to the bacterium's virulence factors and the host immune response. The association between T cells and H. pylori infection has been identified, but the effects of the nine major H. pylori-specific virulence factors (cagA, vacA, oipA, babA, hpaA, napA, dupA, ureA, ureB) on the T cell response in H. pylori-infected patients have not been fully elucidated. We developed a multiplex-PCR assay to detect nine H. pylori virulence genes within three PCR reactions. Also, the expression levels of Th1, Th17 and Treg cell specific cytokines and transcription factors were detected by using qRT-PCR assays. Furthermore, a novel expert-derived model was developed to identify a set of factors and rules that can distinguish ulcer patients from gastritis patients. Among all the virulence factors that we tested, we identified a previously unreported correlation between the presence of the napA virulence gene and ulcer disease. Additionally, a positive correlation between the H. pylori dupA virulence factor and IFN-γ, and between the H. pylori babA virulence factor and IL-17, was detected in gastritis and ulcer patients respectively. By using computer-based models, the clinical outcome of a patient infected with H. pylori can be predicted by screening the patient's H. pylori vacA m1/m2, ureA and cagA status and IFN-γ (Th1), IL-17 (Th17), and FOXP3 (Treg) expression levels. Herein, we report, for the first time, the relationship between H. pylori virulence factors and host immune responses for diagnostic prediction of gastric diseases using computer-based models.

  7. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography. Comparison with cine magnetic resonance imaging

    International Nuclear Information System (INIS)

    Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J.; Coche, Emmanuel; Gerber, Bernhard L.

    2006-01-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)
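
    The global indices compared in the study follow directly from the measured volumes and thicknesses; note that applying the formulas to the cohort-mean values need not reproduce the reported mean EF (55%), which is averaged per patient rather than computed from mean volumes:

```python
def ejection_fraction(edv_ml, esv_ml):
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def wall_thickening(ed_mm, es_mm):
    """Regional wall thickening (%): systolic thickness gain vs diastole."""
    return 100.0 * (es_mm - ed_mm) / ed_mm

# Cohort-mean MDCT values reported in the study
print(round(ejection_fraction(134, 67)))   # → 50
print(round(wall_thickening(8.3, 12.7)))   # → 53
```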

  8. Cultural Distance-Aware Service Recommendation Approach in Mobile Edge Computing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2018-01-01

    Full Text Available In the era of big data, traditional computing systems and paradigms are not efficient and are even difficult to use. For high-performance big data processing, mobile edge computing is emerging as a complementary framework to cloud computing. In this new computing architecture, services are provided within close proximity of mobile users by servers at the edge of the network. The traditional collaborative filtering recommendation approach only focuses on the similarity extracted from the rating data, which may lead to an inaccurate expression of user preference. In this paper, we propose a cultural distance-aware service recommendation approach which focuses not only on the similarity but also on the local characteristics and preference of users. Our approach employs the cultural distance to express the user preference and combines it with similarity to predict the user ratings and recommend the services with higher ratings. In addition, considering the extreme sparsity of the rating data, missing rating prediction based on collaborative filtering is introduced in our approach. The experimental results based on real-world datasets show that our approach outperforms the traditional recommendation approaches in terms of the reliability of recommendation.
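
    The combination of rating similarity with a cultural-distance term can be sketched as a weighted neighbor average; the exponential penalty `exp(-gamma * distance)` and the toy data are assumptions for illustration, not the paper's exact weighting:

```python
import numpy as np

def predict_rating(ratings, sim, cult_dist, user, item, gamma=1.0):
    """Weighted-neighbor rating prediction where each neighbor's weight
    mixes rating similarity with a cultural-distance penalty
    exp(-gamma * distance) (an illustrative combination)."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        r = ratings[other, item]
        if other == user or np.isnan(r):
            continue
        weight = sim[user, other] * np.exp(-gamma * cult_dist[user, other])
        num += weight * r
        den += abs(weight)
    return num / den if den else float("nan")

# User 0 is culturally close to user 1 and far from user 2
ratings = np.array([[4.0, np.nan],
                    [5.0, 4.0],
                    [1.0, 1.0]])
sim = np.ones((3, 3))                    # equal rating similarity throughout
cult = np.array([[0.0, 0.1, 3.0],
                 [0.1, 0.0, 3.0],
                 [3.0, 3.0, 0.0]])
pred = predict_rating(ratings, sim, cult, user=0, item=1)
print(round(pred, 2))   # pulled toward the culturally close user's rating of 4
```

With plain collaborative filtering (equal similarities, no cultural term) the prediction would sit midway between the two neighbors' ratings.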

  9. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
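
    The emulator idea can be sketched end to end on a toy problem: a few runs of an "expensive" model train a cheap response surface, which then replaces η(θ) inside the Gaussian likelihood. The cubic-polynomial emulator and the one-dimensional model below are illustrative stand-ins for the statistical emulator and physics model of the paper:

```python
import numpy as np

def expensive_model(theta):
    """Stand-in for a physics model eta(theta) that is slow to evaluate."""
    return np.sin(theta) + 0.5 * theta

# Small ensemble of model runs (kept small because each run is "expensive")
design = np.linspace(0.0, 3.0, 8)
runs = expensive_model(design)

# Emulator: a cheap response surface fitted to the ensemble (a cubic here;
# the paper estimates a statistical emulator from the run ensemble)
coeffs = np.polyfit(design, runs, 3)

def emulate(theta):
    return np.polyval(coeffs, theta)

# Posterior for theta given one noisy measurement y = eta(theta) + eps,
# evaluating the Gaussian likelihood through the emulator on a dense grid
true_theta, sigma = 1.2, 0.05
y_obs = expensive_model(true_theta) + 0.01
grid = np.linspace(0.0, 3.0, 601)
log_like = -0.5 * ((y_obs - emulate(grid)) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()
print(round(float(grid[np.argmax(post)]), 1))   # posterior mode near 1.2
```

The grid evaluation above only touches the emulator, so the expensive model is called just 8 + 1 times in total.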

  10. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, as well as different techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer in its early stage using digital mammograms. The aim of this book is to stimulate further research in algorithmic and computer-based approaches to medical imaging applications and to utilize them in real-world clinical applications. The book is divided into four parts, Part-I: Clinical Applications of Medical Imaging, Part-II: Classification and Clustering, Part-III: Computer Aided Diagnosis (CAD) Tools and Case Studies and Part-IV: Bio-inspired Computer Aided Diagnosis techniques.

  11. Computer based virtual reality approach towards its application in an accidental emergency at nuclear power plant

    International Nuclear Information System (INIS)

    Yan Jun; Yao Qingshan

    1999-01-01

    Virtual reality is a computer-based system for creating and experiencing virtual worlds. As an emerging branch of the computer discipline, this approach is expanding rapidly and is widely used in a variety of industries such as national defence, research, engineering, medicine and air navigation. The author intends to present the fundamentals of virtual reality, in an attempt to examine aspects of interest for use in nuclear power emergency planning.

  12. A Representational Approach to Knowledge and Multiple Skill Levels for Broad Classes of Computer Generated Forces

    Science.gov (United States)

    1997-12-01

    37. Van Veldhuizen, D. A. and L. J. Hutson. "A Design Methodology for Domain Independent Computer..." Industry Training Systems Conference. 1988. The architecture proposed by Van Veldhuizen and Hutson (37) extends the general architecture to support both a domain-independent approach to implementing CGFs and

  13. D-Wave's Approach to Quantum Computing: 1000-qubits and Counting!

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    In this talk I will describe D-Wave's approach to quantum computing, including the system architecture of our 1000-qubit D-Wave 2X, its programming model, and performance benchmarks. Furthermore, I will describe how the native optimization and sampling capabilities of the quantum processor can be exploited to tackle problems in a variety of fields including medicine, machine learning, physics, and computational finance.

  14. Mathematics of shape description a morphological approach to image processing and computer graphics

    CERN Document Server

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems using the morphological and set-theoretic approach to image processing and computer graphics, presenting a simple shape model based on two basic shape operators called Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in Information Processing, Image Measurement, Shape Description, Shape Representation and Computer Graphics. Post-graduate and advanced undergraduate students in pure and applied mathematics, computer sciences, robotics and engineering will also benefit from this book. Key Features: explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; promotes interaction of image processing, geochronology and mathematics in the field of algebraic geometry; ...

  15. A Cognitive Computing Approach for Classification of Complaints in the Insurance Industry

    Science.gov (United States)

    Forster, J.; Entrup, B.

    2017-10-01

    In this paper we present and evaluate a cognitive computing approach for the classification of dissatisfaction and of four complaint-specific classes in correspondence documents between insurance clients and an insurance company. A cognitive computing approach includes the combination of classical natural language processing methods, machine learning algorithms and the evaluation of hypotheses. The approach combines a MaxEnt machine learning algorithm with language modelling, tf-idf and sentiment analytics to create a multi-label text classification model. The model is trained and tested with a set of 2500 original insurance communication documents written in German, which have been manually annotated by the partnering insurance company. With an F1-score of 0.9, a reliable text classification component has been implemented and evaluated. A final outlook towards a cognitive computing insurance assistant is given at the end.
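
    The tf-idf component of such a pipeline can be sketched in a few lines; the smoothed-idf variant and the toy German tokens are illustrative choices, not the paper's exact configuration:

```python
import math
from collections import Counter

def tf_idf(docs):
    """tf-idf vectors for tokenized documents, with smoothed idf."""
    n = len(docs)
    df = Counter(tok for doc in docs for tok in set(doc))
    idf = {tok: math.log((1 + n) / (1 + c)) + 1.0 for tok, c in df.items()}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({tok: (cnt / len(doc)) * idf[tok]
                        for tok, cnt in tf.items()})
    return vectors

# Toy correspondence snippets (hypothetical German tokens)
docs = [
    "vertrag kuendigung beschwerde".split(),
    "vertrag information anfrage".split(),
    "vertrag schaden beschwerde beschwerde".split(),
]
vecs = tf_idf(docs)
# "beschwerde" appears in fewer documents than "vertrag", so it is weighted
# higher in document 0 and is the stronger complaint signal for a classifier
print(vecs[0]["beschwerde"] > vecs[0]["vertrag"])   # → True
```

These sparse vectors (alongside language-model and sentiment features) would then feed the MaxEnt classifier.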

  16. Anatomical and computed tomographic analysis of the transcochlear and endoscopic transclival approaches to the petroclival region.

    Science.gov (United States)

    Mason, Eric; Van Rompaey, Jason; Carrau, Ricardo; Panizza, Benedict; Solares, C Arturo

    2014-03-01

    Advances in the field of skull base surgery aim to maximize anatomical exposure while minimizing patient morbidity. The petroclival region of the skull base presents numerous challenges for surgical access due to the complex anatomy. The transcochlear approach to the region provides adequate access; however, the resection involved sacrifices hearing and results in at least a grade 3 facial palsy. An endoscopic endonasal approach could potentially avoid negative patient outcomes while providing a desirable surgical window in a select patient population. Cadaveric study. Endoscopic access to the petroclival region was achieved through an endonasal approach. For comparison, a transcochlear approach to the clivus was performed. Different facets of the dissections, such as bone removal volume and exposed surface area, were computed using computed tomography analysis. The endoscopic endonasal approach provided a sufficient corridor to the petroclival region with significantly less bone removal and nearly equivalent exposure of the surgical target, thus facilitating the identification of the relevant anatomy. The lateral approach allowed for better exposure from a posterolateral direction until the inferior petrosal sinus; however, the endonasal approach avoided labyrinthine/cochlear destruction and facial nerve manipulation while providing an anteromedial viewpoint. The endonasal approach also avoided external incisions and cosmetic deficits. The endonasal approach required significant sinonasal resection. Endoscopic access to the petroclival region is a feasible approach. It potentially avoids hearing loss, facial nerve manipulation, and cosmetic damage. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  17. A Randomized Rounding Approach for Optimization of Test Sheet Composing and Exposure Rate Control in Computer-Assisted Testing

    Science.gov (United States)

    Wang, Chu-Fu; Lin, Chih-Lung; Deng, Jien-Han

    2012-01-01

    Testing is an important stage of teaching as it can assist teachers in auditing students' learning results. A good test is able to accurately reflect the capability of a learner. Nowadays, Computer-Assisted Testing (CAT) is greatly improving traditional testing, since computers can automatically and quickly compose a proper test sheet to meet user…

  18. Accurate pre-surgical determination for self-drilling miniscrew implant placement using surgical guides and cone-beam computed tomography.

    Science.gov (United States)

    Miyazawa, Ken; Kawaguchi, Misuzu; Tabuchi, Masako; Goto, Shigemi

    2010-12-01

    Miniscrew implants have proven to be effective in providing absolute orthodontic anchorage. However, as self-drilling miniscrew implants have become more popular, a problem has emerged, i.e. root contact, which can lead to perforation and other root injuries. To avoid possible root damage, a surgical guide was fabricated and cone-beam computed tomography (CBCT) was used to incorporate guide tubes drilled in accordance with the planned direction of the implants. Eighteen patients (5 males and 13 females; mean age 23.8 years; minimum 10.7, maximum 45.5) were included in the study. Forty-four self-drilling miniscrew implants (diameter 1.6 mm, length 8 mm) were placed in interradicular bone using a surgical guide procedure, the majority in the maxillary molar area. To determine the success rates, statistical analysis was undertaken using Fisher's exact probability test. CBCT images of post-surgical self-drilling miniscrew implant placement showed no root contact (0/44). However, based on CBCT evaluation, it was necessary to change the location or angle of 52.3 per cent (23/44) of the guide tubes prior to surgery in order to obtain optimal placement. If orthodontic force could be applied to the screw until completion of orthodontic treatment, screw anchorage was recorded as successful. The total success rate of all miniscrews was 90.9 per cent (40/44). Orthodontic self-drilling miniscrew implants must be inserted carefully, particularly in the case of blind placement, since even guide tubes made on casts frequently require repositioning to avoid the roots of the teeth. The use of surgical guides, fabricated using CBCT images, appears to be a promising technique for placement of orthodontic self-drilling miniscrew implants adjacent to the dental roots and maxillary sinuses.

  19. The soft computing-based approach to investigate allergic diseases: a systematic review.

    Science.gov (United States)

    Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano

    2017-01-01

    Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques, such as artificial neural networks, support vector machines, Bayesian networks and fuzzy logic, to investigate their performance in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered within the PROSPERO database (CRD42016038894). The research was performed on PubMed and ScienceDirect, covering the period starting from September 1, 1990 through April 19, 2016. The review included 27 studies related to allergic diseases and soft computing performance. We observed promising results with an overall accuracy of 86.5%, mainly focused on asthma. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in case of lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.

  20. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services, termed Opportunistic Cloud Computing Services (OCCS), for enterprises, and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay, or paying a minimal fee, for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need

  1. Medium-term generation programming in competitive environments: a new optimisation approach for market equilibrium computing

    International Nuclear Information System (INIS)

    Barquin, J.; Centeno, E.; Reneses, J.

    2004-01-01

    The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)

  2. A Crisis Management Approach To Mission Survivability In Computational Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Aleksander Byrski

    2010-01-01

Full Text Available In this paper we present a biologically-inspired approach for mission survivability (considered as the capability of fulfilling a task such as computation) that allows the system to be aware of the possible threats or crises that may arise. This approach uses the notion of resources used by living organisms to control their populations. We present the concept of energetic selection in agent-based evolutionary systems as well as the means to manipulate the configuration of the computation according to the crises or the user's specific demands.

  3. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    Science.gov (United States)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
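A minimal sketch of the core idea, minimizing a distance between a finite point set and its image under the dynamics, using the Hénon map as a stand-in dynamical system (the map, point count, and optimizer settings are illustrative choices, and the Lennard-Jones term is omitted):

```python
import numpy as np
from scipy.optimize import minimize

def henon(pts, a=1.4, b=0.3):
    # Classic Henon map; its attractor is an invariant set.
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([1.0 - a * x**2 + y, b * x], axis=-1)

def objective(flat, n):
    # For each mapped point, squared distance to the nearest point of
    # the set itself: zero iff the set is (numerically) invariant.
    # (Adding a Lennard-Jones-type pair potential here is what spreads
    # the points evenly, as the paper proposes.)
    pts = flat.reshape(n, 2)
    img = henon(pts)
    d2 = ((img[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).sum()

rng = np.random.default_rng(0)
n = 40
x0 = rng.uniform(-1.0, 1.0, size=(n, 2)).ravel()
res = minimize(objective, x0, args=(n,), method="L-BFGS-B",
               options={"maxiter": 500})
```

The optimizer drives the mismatch between the point set and its image toward zero, pulling the points onto (an approximation of) the invariant set.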

  4. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates development of accurate computational methods that can help focus the follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from high false-positive prediction rates or they do not predict the effect that drugs exert on targets in DTIs. In this report, first, we present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DDR DTI predictions. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  5. A Computer-Aided FPS-Oriented Approach for Construction Briefing

    Institute of Scientific and Technical Information of China (English)

    Xiaochun Luo; Qiping Shen

    2008-01-01

Function performance specification (FPS) is one of the value management (VM) techniques developed for the explicit statement of optimum product definition. This technique is widely used in software engineering and the manufacturing industry, and has proved successful in product-defining tasks. This paper describes an FPS-oriented approach for construction briefing, which is critical to the successful delivery of construction projects. Three techniques, i.e., the function analysis system technique, shared space, and a computer-aided toolkit, are incorporated into the proposed approach. A computer-aided toolkit is developed to facilitate the implementation of FPS in the briefing processes. This approach can facilitate systematic and efficient identification, clarification, and representation of client requirements in trial runs. The limitations of the approach and future research work are also discussed at the end of the paper.

  6. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning.

    Directory of Open Access Journals (Sweden)

    Kristoffer Carl Aberg

Full Text Available Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits.

  7. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning

    Science.gov (United States)

    Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807
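The kind of computational modelling referred to here can be sketched with a standard Q-learning model that uses separate learning rates for positive and negative feedback; the task probabilities and parameter values below are illustrative, not those fitted in the study:

```python
import numpy as np

def simulate(alpha_pos, alpha_neg, beta=3.0, p_reward=(0.8, 0.2),
             trials=2000, seed=0):
    # Q-learning with separate learning rates for positive and negative
    # prediction errors -- one common way of modelling approach- vs.
    # avoidance-dominated learning (all parameter values illustrative).
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    picks_rich = 0
    for _ in range(trials):
        p = np.exp(beta * q)
        p = p / p.sum()                     # softmax choice rule
        choice = rng.choice(2, p=p)
        reward = float(rng.random() < p_reward[choice])
        delta = reward - q[choice]          # prediction error
        q[choice] += (alpha_pos if delta > 0 else alpha_neg) * delta
        picks_rich += int(choice == 0)
    return picks_rich / trials

# A learner driven mainly by positive feedback vs. one driven mainly by
# negative feedback, on the same 80%/20% probabilistic selection task.
acc_approach = simulate(alpha_pos=0.3, alpha_neg=0.05)
acc_avoid = simulate(alpha_pos=0.05, alpha_neg=0.3)
```

Fitting the two learning rates (and the softmax temperature) to each participant's choices is what lets such models relate learning style to trait measures.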

  8. Predicting Transport of 3,5,6-Trichloro-2-Pyridinol Into Saliva Using a Combination Experimental and Computational Approach.

    Science.gov (United States)

    Smith, Jordan Ned; Carver, Zana A; Weber, Thomas J; Timchalk, Charles

    2017-06-01

A combination experimental and computational approach was developed to predict chemical transport into saliva. A serous-acinar chemical transport assay was established to measure chemical transport with nonphysiological (standard cell culture medium) and physiological (using surrogate plasma and saliva medium) conditions using 3,5,6-trichloro-2-pyridinol (TCPy), a metabolite of the pesticide chlorpyrifos. High levels of TCPy protein binding were observed in cell culture medium and rat plasma resulting in different TCPy transport behaviors in the 2 experimental conditions. In the nonphysiological transport experiment, TCPy reached equilibrium at equivalent concentrations in apical and basolateral chambers. At higher TCPy doses, increased unbound TCPy was observed, and TCPy concentrations in apical and basolateral chambers reached equilibrium faster than lower doses, suggesting only unbound TCPy is able to cross the cellular monolayer. In the physiological experiment, TCPy transport was slower than nonphysiological conditions, and equilibrium was achieved at different concentrations in apical and basolateral chambers at a comparable ratio (0.034) to what was previously measured in rats dosed with TCPy (saliva:blood ratio: 0.049). A cellular transport computational model was developed based on TCPy protein binding kinetics and simulated all transport experiments reasonably well using different permeability coefficients for the 2 experimental conditions (1.14 vs 0.4 cm/h for nonphysiological and physiological experiments, respectively). The computational model was integrated into a physiologically based pharmacokinetic model and accurately predicted TCPy concentrations in saliva of rats dosed with TCPy. Overall, this study demonstrates an approach to predict chemical transport in saliva, potentially increasing the utility of salivary biomonitoring in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved.
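A minimal numerical sketch of the modelling idea, that only unbound chemical crosses the monolayer, so unequal protein binding on the two sides sets the equilibrium concentration ratio. The volumes and unbound fractions below are invented; only the permeability value echoes the reported physiological estimate.

```python
import numpy as np

# Two-chamber monolayer transport with protein binding at fast
# equilibrium: only the unbound fraction crosses, with permeability P
# (cm/h) over area A (cm^2).
P, A = 0.4, 1.0              # permeability (physiological estimate), area
V_ap, V_bl = 0.5, 1.5        # apical ("saliva") / basolateral volumes, mL
fu_ap, fu_bl = 0.9, 0.05     # invented unbound fractions in the two media
dt, t_end = 0.01, 48.0       # Euler step (h), total simulated time (h)

C_ap, C_bl = 0.0, 100.0      # dose starts on the basolateral ("blood") side
for _ in range(int(t_end / dt)):
    flux = P * A * (fu_bl * C_bl - fu_ap * C_ap)   # unbound gradient only
    C_ap += flux * dt / V_ap
    C_bl -= flux * dt / V_bl

ratio = C_ap / C_bl          # equilibrium apical:basolateral ratio
```

At equilibrium the unbound concentrations match, so C_ap/C_bl = fu_bl/fu_ap: the same mechanism that yields the sub-unity saliva:blood ratios reported above.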

  9. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2012-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...... a number of novel techniques for relating the outputs and inputs of OTs in a larger construction....

  10. Artificial intelligence and tutoring systems computational and cognitive approaches to the communication of knowledge

    CERN Document Server

    Wenger, Etienne

    2014-01-01

    Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic

  11. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    International Nuclear Information System (INIS)

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

    Highlights: → New approach for stochastic computations based on polynomial chaos. → Development of dynamically adaptive wavelet multiscale solver using space refinement. → Accurate capture of steep gradients and multiscale features in stochastic problems. → All scales of each random mode are captured on independent grids. → Numerical examples demonstrate the need for different space resolutions per mode. - Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or order of polynomial chaos expansions increases. This is invariably the case for large scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected on each solution component or random/uncertainty mode requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space-refinement on physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
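As a minimal illustration of the polynomial chaos machinery underlying such solvers, the spectral coefficients of a one-dimensional test function f(ξ) = exp(ξ) of a standard normal ξ can be computed by Gauss-Hermite quadrature; for this function the coefficients are known analytically (c_k = √e / k!), which makes the sketch self-checking.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Polynomial chaos coefficients of f(xi) = exp(xi), xi ~ N(0,1), in the
# probabilists' Hermite basis He_k:  c_k = E[f He_k] / E[He_k^2], with
# E[He_k^2] = k!.  Analytically, c_k = sqrt(e)/k! for this function.
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)   # expectation under N(0,1)

order = 6
coeffs = np.empty(order + 1)
for k in range(order + 1):
    e_k = np.zeros(k + 1)
    e_k[k] = 1.0
    He_k = hermeval(nodes, e_k)            # He_k at the quadrature nodes
    coeffs[k] = (weights * np.exp(nodes) * He_k).sum() / math.factorial(k)
```

In the full method each coefficient becomes a deterministic field c_k(x, t) obeying a coupled PDE system; the adaptive wavelet refinement then resolves each c_k on its own independent grid.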

  12. A comparative approach to computer aided design model of a dog femur.

    Science.gov (United States)

    Turamanlar, O; Verim, O; Karabulut, A

    2016-01-01

Computer assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three dimensional (3D) modelling of soft tissues and bones is becoming more important. The accuracy of the analysis in modelling processes depends on the outline of the tissues derived from medical images. The aim of this study is to evaluate the accuracy of 3D models of a dog femur derived from computed tomography data by using the point cloud method and the boundary line method in several modelling software packages. Solidworks, Rapidform and 3DSMax software were used to create 3D models, and the outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with the stereolithography method using a rapid prototyping device. Furthermore, the linearity of the volumes of the models was investigated between the software and the constructed models. The difference between the software and real models reflects the sensitivity of the software and the devices used in this manner.

  13. Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis

    NARCIS (Netherlands)

    Nikkilä, J.; Vos, de W.M.

    2010-01-01

    GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex

  14. Can Computers Be Used for Whole Language Approaches to Reading and Language Arts?

    Science.gov (United States)

    Balajthy, Ernest

    Holistic approaches to the teaching of reading and writing, most notably the Whole Language movement, reject the philosophy that language skills can be taught. Instead, holistic teachers emphasize process, and they structure the students' classroom activities to be rich in language experience. Computers can be used as tools for whole language…

  15. How people learn while playing serious games: A computational modelling approach

    NARCIS (Netherlands)

    Westera, Wim

    2017-01-01

    This paper proposes a computational modelling approach for investigating the interplay of learning and playing in serious games. A formal model is introduced that allows for studying the details of playing a serious game under diverse conditions. The dynamics of player action and motivation is based

  16. Relationships among Taiwanese Children's Computer Game Use, Academic Achievement and Parental Governing Approach

    Science.gov (United States)

    Yeh, Duen-Yian; Cheng, Ching-Hsue

    2016-01-01

    This study examined the relationships among children's computer game use, academic achievement and parental governing approach to propose probable answers for the doubts of Taiwanese parents. 355 children (ages 11-14) were randomly sampled from 20 elementary schools in a typically urbanised county in Taiwan. Questionnaire survey (five questions)…

  17. A Computer-Based Game That Promotes Mathematics Learning More than a Conventional Approach

    Science.gov (United States)

    McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.; Forlizzi, Jodi

    2017-01-01

Excitement about learning from computer-based games has been palpable in recent years and has led to the development of many educational games. However, there are relatively few sound empirical studies in the scientific literature that have shown the benefits of learning mathematics from games as opposed to more traditional approaches. The…

  18. Computer simulation of HTGR fuel microspheres using a Monte-Carlo statistical approach

    International Nuclear Information System (INIS)

    Hedrick, C.E.

    1976-01-01

    The concept and computational aspects of a Monte-Carlo statistical approach in relating structure of HTGR fuel microspheres to the uranium content of fuel samples have been verified. Results of the preliminary validation tests and the benefits to be derived from the program are summarized

  19. Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor

    Science.gov (United States)

    Pustovetov, M. Yu

    2018-03-01

This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the model of an induction motor as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by special semiconductor converter circuitry.
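As a hedged aside, not the paper's dynamic three-phase model: simulations of this kind are commonly sanity-checked against the steady-state torque-slip curve of the standard per-phase equivalent circuit, which is easy to compute directly (machine parameters below are invented).

```python
import numpy as np

# Steady-state per-phase equivalent circuit of an induction motor:
# electromagnetic torque as a function of slip s.
V, f, p = 230.0, 50.0, 2                 # phase voltage (V), Hz, pole pairs
R1, R2 = 0.5, 0.6                        # stator / referred rotor resistance
X1, X2 = 1.2, 1.3                        # stator / referred rotor leakage
w_sync = 2.0 * np.pi * f / p             # synchronous mechanical speed

def torque(s):
    # Torque (N*m) from the simplified equivalent circuit at slip s.
    return (3.0 * V**2 * (R2 / s)) / (
        w_sync * ((R1 + R2 / s) ** 2 + (X1 + X2) ** 2))

slips = np.linspace(0.001, 1.0, 2000)
curve = torque(slips)
s_breakdown = R2 / np.hypot(R1, X1 + X2)  # analytic breakdown slip
```

The numerical maximum of the curve lands at the analytic breakdown slip R2/√(R1² + (X1+X2)²), a standard check for such models.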

  20. Combinatorial computational chemistry approach to the design of metal catalysts for deNOx

    International Nuclear Information System (INIS)

    Endou, Akira; Jung, Changho; Kusagaya, Tomonori; Kubo, Momoji; Selvam, Parasuraman; Miyamoto, Akira

    2004-01-01

    Combinatorial chemistry is an efficient technique for the synthesis and screening of a large number of compounds. Recently, we introduced the combinatorial approach to computational chemistry for catalyst design and proposed a new method called ''combinatorial computational chemistry''. In the present study, we have applied this combinatorial computational chemistry approach to the design of precious metal catalysts for deNO x . As the first step of the screening of the metal catalysts, we studied Rh, Pd, Ag, Ir, Pt, and Au clusters regarding the adsorption properties towards NO molecule. It was demonstrated that the energetically most stable adsorption state of NO on Ir model cluster, which was irrespective of both the shape and number of atoms including the model clusters

  1. Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.

    Science.gov (United States)

    Montalvo-Acosta, Joel José; Cecchini, Marco

    2016-12-01

    The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
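One relation every such framework shares is the link between the standard binding free energy and the equilibrium (dissociation) constant; a small numeric sketch (the example ΔG value is arbitrary):

```python
import math

# Standard binding free energy <-> dissociation constant, at the
# standard concentration C0 = 1 M:  K_d = C0 * exp(dG / RT).
R = 1.987204e-3            # gas constant, kcal/(mol*K)
T = 298.15                 # temperature, K

def kd_from_dg(dg_kcal_per_mol):
    # Returns K_d in molar units (relative to C0 = 1 M).
    return math.exp(dg_kcal_per_mol / (R * T))

kd = kd_from_dg(-9.0)      # an arbitrary, drug-like binding free energy
```

So ΔG = −9 kcal/mol corresponds to roughly a 250 nM dissociation constant; the approaches reviewed differ in how they estimate ΔG, not in this conversion.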

  2. Assessment of an extended Nijboer-Zernike approach for the computation of optical point-spread functions.

    Science.gov (United States)

    Braat, Joseph; Dirksen, Peter; Janssen, Augustus J E M

    2002-05-01

We assess the validity of an extended Nijboer-Zernike approach [J. Opt. Soc. Am. A 19, 849 (2002)], based on recently found Bessel-series representations of diffraction integrals comprising an arbitrary aberration and a defocus part, for the computation of optical point-spread functions of circular, aberrated optical systems. These new series representations yield a flexible means to compute optical point-spread functions, both accurately and efficiently, under defocus and aberration conditions that seem to cover almost all cases of practical interest. Because of the analytical nature of the formulas, there are no discretization effects limiting the accuracy, as opposed to the more commonly used numerical packages based on strictly numerical integration methods. Instead, we have an easily managed criterion, expressed in the number of terms to be included in the Bessel-series representations, guaranteeing the desired accuracy. For this reason, the analytical method can also serve as a calibration tool for the numerically based methods. The analysis is not limited to pointlike objects but can also be used for extended objects under various illumination conditions. The calculation schemes are simple and permit one to trace the relative strength of the various interfering complex-amplitude terms that contribute to the final image intensity function.
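A minimal numerical sketch of the kind of diffraction integral these series represent: for an aberration-free, in-focus circular pupil, the normalized field reduces to the classic Airy form, which straightforward quadrature reproduces (the grid size and sample radius r are arbitrary choices).

```python
import numpy as np
from scipy.special import j0, j1

# In-focus, aberration-free circular pupil: the diffraction integral
#   U(r) = 2 * int_0^1 J0(2*pi*r*rho) * rho d(rho)
# has the closed form J1(2*pi*r)/(pi*r) (the Airy amplitude), which
# gives the midpoint-rule quadrature an exact check.
edges = np.linspace(0.0, 1.0, 20001)
mid = 0.5 * (edges[:-1] + edges[1:])       # midpoint rule over the pupil
drho = edges[1] - edges[0]

def field(r):
    return 2.0 * np.sum(j0(2.0 * np.pi * r * mid) * mid) * drho

r = 0.7                                    # arbitrary normalized radius
numeric = field(r)
analytic = j1(2.0 * np.pi * r) / (np.pi * r)
```

Defocus and aberrations add a phase factor exp(if·ρ² + iΦ(ρ, θ)) inside the integral; the extended Nijboer-Zernike result replaces such quadrature with rapidly converging Bessel series.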

  3. An accurate cost effective DFT approach to study the sensing behaviour of polypyrrole towards nitrate ions in gas and aqueous phases.

    Science.gov (United States)

    Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid

    2016-07-28

    Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold standard CCSD(T) method. Then, a number of low cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).

  4. ceRNAs in plants: computational approaches and associated challenges for target mimic research.

    Science.gov (United States)

    Paschoal, Alexandre Rossi; Lozada-Chávez, Irma; Domingues, Douglas Silva; Stadler, Peter F

    2017-05-30

The competing endogenous RNA hypothesis has gained increasing attention as a potential global regulatory mechanism of microRNAs (miRNAs), and as a powerful tool to predict the function of many noncoding RNAs, including miRNAs themselves. Most studies have focused on animals, although target mimic (TM) discovery, as well as important computational and experimental advances, has developed in plants over the past decade. Thus, our contribution summarizes recent progress in computational approaches for research of miRNA:TM interactions. We divided this article into three main contributions. First, a general overview of research on TMs in plants is presented with practical descriptions of the available literature, tools, data, databases and computational reports. Second, we describe a common protocol for the computational and experimental analyses of TMs. Third, we provide a bioinformatics approach for the prediction of TM motifs potentially cross-targeting both members within the same or from different miRNA families, based on the identification of consensus miRNA-binding sites from known TMs across sequenced genomes, transcriptomes and known miRNAs. This computational approach is promising because, in contrast to animals, miRNA families in plants are large with identical or similar members, several of which are also highly conserved. Of the three consensus TM motifs found with our approach, MIM166, MIM171 and MIM159/319, the last one has found strong support in the recent experimental work by Reichel and Millar [Specificity of plant microRNA TMs: cross-targeting of mir159 and mir319. J Plant Physiol 2015;180:45-8]. Finally, we stress the discussion on the major computational and associated experimental challenges that have to be faced in future ceRNA studies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
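The consensus-site idea in the third contribution can be caricatured as a sequence scan for a near-reverse-complement of a miRNA seed interrupted by the short central bulge typical of plant target mimics; the miRNA-like sequence, bulge length, and transcript below are all invented for illustration.

```python
import re

# Toy consensus-site scan: search a transcript for the reverse
# complement of a miRNA-like sequence split by a 3-nt central bulge --
# a crude caricature of a plant target-mimic motif.
def revcomp(rna):
    return rna.translate(str.maketrans("ACGU", "UGCA"))[::-1]

mirna = "UUGGACUGAAGGGAGCUCCU"             # illustrative, miR159-like
rc = revcomp(mirna)
# Perfect pairing on both flanks, any 3 nucleotides bulged in between.
motif = re.compile(re.escape(rc[:9]) + "[ACGU]{3}" + re.escape(rc[9:]))

transcript = "AAGG" + rc[:9] + "CUU" + rc[9:] + "GGAA"
hit = motif.search(transcript)
```

A real pipeline would tolerate scored mismatches and weigh conservation across genomes and transcriptomes rather than demand exact flank matches.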

  5. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2011-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...... a number of novel techniques for relating the outputs and inputs of OTs in a larger construction. We also report on an implementation of this approach, that shows that our protocol is more efficient than any previous one: For big enough circuits, we can evaluate more than 20000 Boolean gates per second...

  6. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
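A toy sketch of the kind of iterative update at stake (a Landweber/SIRT-style scheme), with a small random matrix standing in for the CT projection operator; real system matrices are vastly larger, which is precisely what motivates the multiresolution approach.

```python
import numpy as np

# Landweber/SIRT-style iteration  x <- x + lam * A^T (b - A x),
# with a small random matrix standing in for the projection operator.
rng = np.random.default_rng(1)
A = rng.random((60, 30))                 # toy "projection" matrix
x_true = rng.random(30)
b = A @ x_true                           # noiseless measurements

lam = 1.0 / np.linalg.norm(A, 2) ** 2    # step size ensuring convergence
x = np.zeros(30)
res0 = np.linalg.norm(b - A @ x)
for _ in range(500):
    x += lam * A.T @ (b - A @ x)         # one cheap update per sweep
res = np.linalg.norm(b - A @ x)
```

Every iteration touches the full volume x, so for gigavoxel problems the memory footprint, not the arithmetic alone, becomes the bottleneck that the multiresolution scheme addresses.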

  7. Current Trend Towards Using Soft Computing Approaches to Phase Synchronization in Communication Systems

    Science.gov (United States)

    Drake, Jeffrey T.; Prasad, Nadipuram R.

    1999-01-01

    This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act in producing the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely, fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.

  8. Wavelets-Computational Aspects of Sterian Realistic Approach to Uncertainty Principle in High Energy Physics: A Transient Approach

    Directory of Open Access Journals (Sweden)

    Cristian Toma

    2013-01-01

Full Text Available This study presents wavelets-computational aspects of the Sterian realistic approach to the uncertainty principle in high energy physics. According to this approach, one cannot make a device for the simultaneous measuring of the canonical conjugate variables in reciprocal Fourier spaces. However, such aspects regarding the use of conjugate Fourier spaces can also be noticed in quantum field theory, where the position representation of a quantum wave is replaced by the momentum representation before computing the interaction in a certain point of space, at a certain moment of time. For this reason, certain properties regarding the switch from one representation to another in these conjugate Fourier spaces should be established. It is shown that the best results can be obtained using wavelets aspects and support macroscopic functions for computing (i) wave-train nonlinear relativistic transformation, (ii) reflection/refraction with a constant shift, (iii) diffraction considered as interaction with a null phase shift without annihilation of the associated wave, (iv) deflection by external electromagnetic fields without phase loss, and (v) annihilation of the associated wave-train through fast and spatially extended phenomena according to the uncertainty principle.

  9. Dystrophic calcification in muscles of legs in calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia syndrome: Accurate evaluation of the extent with 99mTc-methylene diphosphonate single photon emission computed tomography/computed tomography

    International Nuclear Information System (INIS)

    Chakraborty, Partha Sarathi; Karunanithi, Sellam; Dhull, Varun Singh; Kumar, Kunal; Tripathi, Madhavi

    2015-01-01

We present the case of a 35-year-old man with calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia variant scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. 99mTc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and bilateral legs which was involving the muscles and was well-delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis

  10. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    Science.gov (United States)

    Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil

    2018-03-01

    This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  11. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammad Zargar

    2018-03-01

    Full Text Available Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite the many benefits this method offers, it has not been universally adopted because of obstacles including security issues, and it has become a concern for IT managers in organizations. In this paper, the general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables that affect the acceptance of technology and, especially, of cloud computing technology. Then, using the DEMATEL technique, the influence and susceptibility of the variables were determined. The researchers also designed a model to show the dynamics of cloud computing technology adoption using the system dynamics approach. The validity of the model was confirmed through evaluation methods for dynamic models using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed, and the implementation of these scenarios was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
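    The DEMATEL step described in this record has a standard computable core: a direct-influence matrix is normalized and converted into a total-relation matrix T = D(I − D)⁻¹, whose row and column sums give each variable's influence and susceptibility. A minimal Python sketch of that computation — the 3×3 influence matrix for three factors is an invented example, not data from the study:

    ```python
    def identity(n):
        return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def mat_inv(A):
        # Gauss-Jordan elimination with partial pivoting
        n = len(A)
        M = [row[:] + ident_row[:] for row, ident_row in zip(A, identity(n))]
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[pivot] = M[pivot], M[col]
            p = M[col][col]
            M[col] = [x / p for x in M[col]]
            for r in range(n):
                if r != col:
                    f = M[r][col]
                    M[r] = [x - f * y for x, y in zip(M[r], M[col])]
        return [row[n:] for row in M]

    # Hypothetical direct-influence matrix for three factors
    # (e.g. data security, government support, user training)
    A = [[0, 3, 2],
         [1, 0, 3],
         [2, 1, 0]]
    s = max(sum(row) for row in A)              # normalization constant
    D = [[x / s for x in row] for row in A]     # normalized direct-influence matrix
    I_minus_D = [[i - d for i, d in zip(ir, dr)]
                 for ir, dr in zip(identity(3), D)]
    T = mat_mul(D, mat_inv(I_minus_D))          # total-relation matrix T = D(I - D)^-1

    influence = [sum(row) for row in T]                                   # row sums r
    susceptibility = [sum(T[i][j] for i in range(3)) for j in range(3)]   # column sums c
    prominence = [r + c for r, c in zip(influence, susceptibility)]
    relation = [r - c for r, c in zip(influence, susceptibility)]
    ```

    Factors with a positive `relation` value act as net causes; `prominence` ranks overall importance — the two quantities DEMATEL studies typically plot against each other.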

  12. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    Directory of Open Access Journals (Sweden)

    Vostokin Sergei

    2018-03-01

    Full Text Available This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of “on-demand” access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  13. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction of pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners’ experimentation with the provided programming models, developing learners’ competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP parallelization tools used for implementation.
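    The pipeline model this record is built around can be demonstrated without MPI: each stage runs concurrently, consuming items from an input queue and producing to an output queue, with a sentinel propagated downstream to shut the pipeline down. A minimal Python sketch, with threads and queues standing in for the C/MPI processes used in the paper:

    ```python
    import queue
    import threading

    def stage(fn, q_in, q_out):
        """Run one pipeline stage: apply fn to each item until a None sentinel."""
        while True:
            item = q_in.get()
            if item is None:            # propagate the shutdown signal downstream
                if q_out is not None:
                    q_out.put(None)
                break
            result = fn(item)
            if q_out is not None:
                q_out.put(result)

    results = []
    q1, q2 = queue.Queue(), queue.Queue()
    stages = [
        threading.Thread(target=stage, args=(lambda x: x * x, q1, q2)),   # stage 1: square
        threading.Thread(target=stage, args=(results.append, q2, None)),  # stage 2: collect
    ]
    for t in stages:
        t.start()
    for item in range(5):
        q1.put(item)                    # feed the pipeline
    q1.put(None)                        # sentinel: no more input
    for t in stages:
        t.join()
    # results now holds [0, 1, 4, 9, 16], in order, because each stage is a
    # single worker draining a FIFO queue
    ```

    Throughput comes from the stages overlapping in time: while stage 2 stores item k, stage 1 is already squaring item k+1 — the same reasoning the queueing-system model formalizes.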

  14. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    Science.gov (United States)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, is not very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm: a hybrid approach that switches between the conventional schemes (LA and MELA) and CAD to reduce the overall computational load. A reliable condition for switching the schemes was derived after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
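    The two cheaper interpolators in CASA are easy to sketch. Plain line averaging (LA) fills a missing pixel with the mean of the pixels directly above and below it; edge-based line averaging (a simplified stand-in for the paper's MELA, not its exact formulation) first picks, among the vertical direction and the two 45° diagonals, the pixel pair with the smallest difference and averages that pair, so interpolation follows the edge instead of cutting across it:

    ```python
    def line_average(above, below, x):
        """LA: average the pixels directly above and below the missing one."""
        return (above[x] + below[x]) / 2

    def edge_based_average(above, below, x):
        """Simplified MELA: interpolate along the direction of least
        difference (vertical or one of the two 45-degree diagonals)."""
        def diff(d):
            if 0 <= x + d < len(above) and 0 <= x - d < len(below):
                return abs(above[x + d] - below[x - d])
            return float("inf")
        best = min((-1, 0, 1), key=diff)
        return (above[x + best] + below[x - best]) / 2

    # On a flat region both interpolators agree ...
    flat = line_average([5, 5, 5], [5, 5, 5], 1)          # 5.0
    # ... but on a diagonal edge the edge-directed version avoids
    # averaging across the edge:
    la = line_average([0, 9, 9], [0, 0, 0], 1)            # 4.5 (edge blurred)
    mela = edge_based_average([0, 9, 9], [0, 0, 0], 1)    # 0.0 (edge preserved)
    ```

    CASA's selection step then amounts to classifying each region (plain/medium/complex) and dispatching to LA, the edge-based interpolator, or the costlier covariance-based CAD accordingly.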

  15. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC failure using a statistical approach based on reliability methods from probabilistic engineering mechanics. The probability of failure (i.e. the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties on the input parameters influencing the levels of interference in the context of transmission lines. The study allowed us to evaluate the probability of failure of the induced current by using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)
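    The crude Monte Carlo estimate that such reliability methods are benchmarked against is straightforward: sample the uncertain line parameters, evaluate the induced current, and count threshold exceedances. A sketch with an invented surrogate model — the coupling function, parameter ranges and threshold below are illustrative, not the paper's transmission-line model:

    ```python
    import random

    def induced_current(height, spacing, load):
        """Toy crosstalk surrogate: coupling grows with line height and
        shrinks with wire spacing and load resistance (illustrative only)."""
        return 0.5 * height / (spacing * load)

    random.seed(42)
    THRESHOLD = 0.05   # failure criterion: induced current above this level
    N = 100_000

    failures = 0
    for _ in range(N):
        h = random.uniform(1.0, 2.0)      # line height (m), uncertain
        s = random.uniform(0.1, 0.5)      # wire spacing (m), uncertain
        r = random.uniform(50.0, 100.0)   # load resistance (ohm), uncertain
        if induced_current(h, s, r) > THRESHOLD:
            failures += 1

    p_fail = failures / N   # estimated probability of exceeding the threshold
    ```

    The estimator's relative error scales roughly like 1/sqrt(N · p_fail), which is precisely why reliability methods such as FORM/SORM or importance sampling become attractive when the failure probability is small.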

  16. Stochastic approach for round-off error analysis in computing application to signal processing algorithms

    International Nuclear Information System (INIS)

    Vignes, J.

    1986-01-01

    Any result of an algorithm executed on a computer contains an error resulting from floating-point round-off error propagation. Furthermore, signal processing algorithms are generally performed on data that themselves contain errors. The permutation-perturbation method, also known as CESTAC (contrôle et estimation stochastique des arrondis de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact number of significant decimal figures of any result of an algorithm performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between the theoretical and practical aspects are described in this paper.
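    The idea behind the permutation-perturbation method can be imitated in a few lines: run the same computation several times with tiny random relative perturbations, and read the number of significant digits off the ratio of the mean result to its scatter. This is only a caricature of CESTAC — which perturbs the last bits of every intermediate rounding, not just the inputs — and it uses an exaggerated perturbation size so the effect is visible:

    ```python
    import math
    import random

    random.seed(7)
    EPS = 2.0 ** -40   # exaggerated relative perturbation (CESTAC works at the last bit)

    def perturb(x):
        return x * (1.0 + random.uniform(-1.0, 1.0) * EPS)

    def significant_digits(algorithm, runs=20):
        """Estimate how many decimal digits of the result are reliable."""
        samples = [algorithm(perturb) for _ in range(runs)]
        mean = sum(samples) / runs
        var = sum((s - mean) ** 2 for s in samples) / (runs - 1)
        std = math.sqrt(var)
        if std == 0.0 or mean == 0.0:
            return 15   # all runs agree to double precision
        return max(0, min(15, int(math.log10(abs(mean) / std))))

    def stable(p):      # well-conditioned sum: nearly all digits survive
        return p(1.0) + p(2.0)

    def unstable(p):    # catastrophic cancellation: almost no digits survive
        return (p(1.0e16) + p(1.0)) - p(1.0e16)

    d_stable = significant_digits(stable)
    d_unstable = significant_digits(unstable)
    ```

    The stable sum keeps essentially full precision, while the cancellation example reports close to zero reliable digits even though each single run returns a plausible-looking number — exactly the failure mode CESTAC is designed to expose.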

  17. Information Technology Service Management with Cloud Computing Approach to Improve Administration System and Online Learning Performance

    Directory of Open Access Journals (Sweden)

    Wilianto Wilianto

    2015-10-01

    Full Text Available This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and simulated using Riverbed Academic Edition Modeler to obtain values of the parameters: delay, load, CPU utilization, and throughput. The simulation results are as follows. For network topology 1 without cloud computing, the average delay is 54 ms, load 110,000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2,800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2 without cloud computing, the average delay is 39 ms, load 3,500 bits/s, CPU utilization 0.02%, and database server throughput 1,400 bits/s. With cloud computing, the average delay is 26 ms, load 5,400 bits/s, CPU utilization 0.0001% (email server), 0.001% (FTP server) and 0.0002% (HTTP server), and throughput 85 bits/s (email server), 100 bits/s (FTP server) and 95 bits/s (HTTP server). Thus, the delay, the load, and the CPU utilization decrease, while the throughput increases. Information technology service management with the cloud computing approach has better performance.

  18. Computational intelligence approach for NOx emissions minimization in a coal-fired utility boiler

    International Nuclear Information System (INIS)

    Zhou Hao; Zheng Ligang; Cen Kefa

    2010-01-01

    The current work presents a computational intelligence approach for minimizing NOx emissions in a 300 MW dual-furnace coal-fired utility boiler. The fundamental idea behind this work includes NOx emissions characteristics modeling and NOx emissions optimization. First, an objective function estimating the NOx emissions characteristics from nineteen operating parameters of the studied boiler was represented by a support vector regression (SVR) model. Second, four levels of primary air (PA) velocities and six levels of secondary air (SA) velocities were regulated by using particle swarm optimization (PSO) so as to achieve low-NOx-emissions combustion. To reduce the computation time, a more flexible stopping condition was used to improve the computational efficiency without loss of quality in the optimization results. The results showed that the proposed approach provides an effective way to reduce NOx emissions from 399.7 ppm to 269.3 ppm, which was much better than a genetic algorithm (GA) based method and slightly better than an ant colony optimization (ACO) based approach reported in earlier work. The main advantage of PSO is that the computational cost, typically less than 25 s on a PC, is much less than that required for ACO. This means the proposed approach is more applicable to online and real-time applications for NOx emissions minimization in actual power plant boilers.
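    The PSO loop used for the air-velocity search is standard and small. A generic Python sketch minimizing a smooth two-variable stand-in objective within box bounds — the trained SVR emissions model itself is not reproduced; the quadratic below is a placeholder with a known minimum at (3, 5):

    ```python
    import random

    random.seed(3)

    def objective(x, y):
        """Placeholder for the SVR emissions model: minimum at (3, 5)."""
        return (x - 3.0) ** 2 + (y - 5.0) ** 2

    LO, HI = 0.0, 10.0          # box bounds on the two decision variables
    N_PARTICLES, N_ITER = 30, 200
    W, C1, C2 = 0.7, 1.5, 1.5   # inertia, cognitive and social coefficients

    pos = [[random.uniform(LO, HI), random.uniform(LO, HI)] for _ in range(N_PARTICLES)]
    vel = [[0.0, 0.0] for _ in range(N_PARTICLES)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [objective(*p) for p in pos]
    g = min(range(N_PARTICLES), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best

    for _ in range(N_ITER):
        for i in range(N_PARTICLES):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(HI, max(LO, pos[i][d] + vel[i][d]))
            val = objective(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    ```

    A boiler application would replace `objective` with the trained SVR predictor and extend the position vectors to the ten PA/SA velocities; the paper's loosened stopping condition amounts to breaking out of the loop once `gbest_val` stops improving, rather than running a fixed iteration budget.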

  19. An Approach for Indoor Path Computation among Obstacles that Considers User Dimension

    Directory of Open Access Journals (Sweden)

    Liu Liu

    2015-12-01

    Full Text Available People often transport objects within indoor environments and need enough space for the motion. In such cases, the accessibility of indoor spaces relies on the dimensions of both the person and the objects she/he operates. This paper proposes a new approach to avoid obstacles and compute indoor paths with respect to the user dimension. The approach excludes inaccessible spaces for a user in five steps: (1) compute the minimum distance between obstacles and find the inaccessible gaps; (2) group obstacles according to the inaccessible gaps; (3) identify groups of obstacles that influence the path between two locations; (4) compute boundaries for the selected groups; and (5) build a network in the accessible area around the obstacles in the room. Compared to the Minkowski sum method for outlining inaccessible spaces, the proposed approach generates simpler polygons for groups of obstacles that do not contain inner rings. The creation of a navigation network becomes easier based on these simple polygons. By using this approach, we can create user- and task-specific networks in advance. Alternatively, the accessible path can be generated on the fly before the user enters a room.
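    Steps (1) and (2) — finding inaccessible gaps and grouping obstacles — can be made concrete for axis-aligned rectangular obstacles: the gap between two rectangles is the Euclidean distance between their nearest edges, and obstacles whose mutual gap is narrower than the user's width are merged into one group. A minimal Python sketch (rectangular obstacles and the width threshold are illustrative simplifications of the paper's polygons):

    ```python
    import math
    from collections import deque

    def gap(a, b):
        """Minimum distance between two axis-aligned rectangles
        (xmin, ymin, xmax, ymax); 0 if they touch or overlap."""
        dx = max(b[0] - a[2], a[0] - b[2], 0.0)
        dy = max(b[1] - a[3], a[1] - b[3], 0.0)
        return math.hypot(dx, dy)

    def group_obstacles(rects, user_width):
        """Group obstacles whose mutual gaps are too narrow to pass through
        (breadth-first search over the 'inaccessible gap' graph)."""
        n = len(rects)
        seen, groups = set(), []
        for start in range(n):
            if start in seen:
                continue
            comp, frontier = [], deque([start])
            seen.add(start)
            while frontier:
                i = frontier.popleft()
                comp.append(i)
                for j in range(n):
                    if j not in seen and gap(rects[i], rects[j]) < user_width:
                        seen.add(j)
                        frontier.append(j)
            groups.append(sorted(comp))
        return groups

    obstacles = [(0, 0, 1, 1),     # 0: near obstacle 1 (gap 0.2)
                 (1.2, 0, 2, 1),   # 1
                 (5, 5, 6, 6)]     # 2: isolated
    groups = group_obstacles(obstacles, user_width=0.5)
    # -> [[0, 1], [2]]: the 0.2 gap is impassable for a 0.5-wide user,
    # so obstacles 0 and 1 form a single group to be outlined together
    ```

    Steps (4) and (5) would then outline each group with a single boundary polygon and build the navigation network only in the space that remains.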

  20. Radiological management of blunt polytrauma with computed tomography and angiography: an integrated approach

    Energy Technology Data Exchange (ETDEWEB)

    Kurdziel, J.C.; Dondelinger, R.F.; Hemmer, M.

    1987-01-01

    107 polytraumatized patients who had experienced blunt trauma were worked up at admission with computed tomography of the thorax, abdomen and pelvis following a computed tomography study of the brain: significant lesions were revealed in 98 (90%) patients. 79 (74%) patients showed trauma to the thorax; in 69 (64%) patients abdominal or pelvic trauma was evidenced. No false positive diagnosis was established. 5 traumatic findings were missed. Emergency angiography was indicated in 3 (3%) patients following the computed tomography examination. 3 other trauma patients were submitted directly to angiography without a computed tomography examination during the period of this study. Embolization was carried out in 5/6 patients. No thoracotomy was needed. 13 (12%) patients underwent laparotomy following computed tomography. Overall mortality during the hospital stay was 14% (15/107). No patient died from visceral bleeding. Conservative management of blunt polytrauma patients can be advocated in almost 90% of visceral lesions. Computed tomography coupled with angiography and embolization represents an adequate integrated approach to the management of blunt polytrauma patients.

  1. Radiological management of blunt polytrauma with computed tomography and angiography: an integrated approach

    International Nuclear Information System (INIS)

    Kurdziel, J.C.; Dondelinger, R.F.; Hemmer, M.

    1987-01-01

    107 polytraumatized patients who had experienced blunt trauma were worked up at admission with computed tomography of the thorax, abdomen and pelvis following a computed tomography study of the brain: significant lesions were revealed in 98 (90%) patients. 79 (74%) patients showed trauma to the thorax; in 69 (64%) patients abdominal or pelvic trauma was evidenced. No false positive diagnosis was established. 5 traumatic findings were missed. Emergency angiography was indicated in 3 (3%) patients following the computed tomography examination. 3 other trauma patients were submitted directly to angiography without a computed tomography examination during the period of this study. Embolization was carried out in 5/6 patients. No thoracotomy was needed. 13 (12%) patients underwent laparotomy following computed tomography. Overall mortality during the hospital stay was 14% (15/107). No patient died from visceral bleeding. Conservative management of blunt polytrauma patients can be advocated in almost 90% of visceral lesions. Computed tomography coupled with angiography and embolization represents an adequate integrated approach to the management of blunt polytrauma patients.

  2. An Accurate CT Saturation Classification Using a Deep Learning Approach Based on Unsupervised Feature Extraction and Supervised Fine-Tuning Strategy

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2017-11-01

    Full Text Available Current transformer (CT) saturation is one of the significant problems for protection engineers. If CT saturation is not tackled properly, it can have a disastrous effect on the stability of the power system, and may even create a complete blackout. To cope with CT saturation properly, accurate detection or classification should be performed first. Recently, deep learning (DL) methods have brought a subversive revolution in the field of artificial intelligence (AI). This paper presents a new DL classification method based on unsupervised feature extraction and a supervised fine-tuning strategy to classify the saturated and unsaturated regions in the case of CT saturation. In other words, if the protection system is subjected to CT saturation, the proposed method will correctly classify the different levels of saturation with high accuracy. Traditional AI methods are mostly based on supervised learning and rely heavily on human-crafted features. This paper contributes an unsupervised feature extraction, using autoencoders and deep neural networks (DNNs) to extract features automatically without prior knowledge of the optimal features. To validate the effectiveness of the proposed method, a variety of simulation tests were conducted, and the classification results were analyzed using standard classification metrics. The simulation results confirm that the proposed method classifies the different levels of CT saturation with remarkable accuracy and has unique feature extraction capabilities. Lastly, we provide a potential future research direction to conclude this paper.

  3. Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs

    Science.gov (United States)

    Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan

    2014-01-01

    Computational Fluid Dynamic (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV), Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and air-worthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier Stokes (N-S) flow solvers (USM3D, Kestrel and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then replacing or adjusting the data as predictions from higher order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown and the methods will be evaluated on their utility during each portion of the flight envelope.

  4. Computational Approach for Securing Radiology-Diagnostic Data in Connected Health Network using High-Performance GPU-Accelerated AES.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2017-03-01

    Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. All recent standard healthcare procedures have immensely benefitted from contemporary information technology revolutions, which have revolutionized the approaches to acquiring, storing and sharing diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, the modern medicinal approach has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of not being able to successfully encrypt both image and textual data persists. Furthermore, the processing time of encryption-decryption of medical datasets, within a considerably lower computational cost and without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for a connected health network using high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting-decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also

  5. A direct approach to fault-tolerance in measurement-based quantum computation via teleportation

    International Nuclear Information System (INIS)

    Silva, Marcus; Danos, Vincent; Kashefi, Elham; Ollivier, Harold

    2007-01-01

    We discuss a simple variant of the one-way quantum computing model (Raussendorf R and Briegel H-J 2001 Phys. Rev. Lett. 86 5188), called the Pauli measurement model, where measurements are restricted to be along the eigenbases of the Pauli X and Y operators, while qubits can be initially prepared both in the |+_π/4⟩ := (1/√2)(|0⟩ + e^(iπ/4)|1⟩) state and the usual |+⟩ := (1/√2)(|0⟩ + |1⟩) state. We prove the universality of this quantum computation model, and establish a standardization procedure which permits all entanglement and state preparation to be performed at the beginning of the computation. This leads us to develop a direct approach to fault-tolerance by simple transformations of the entanglement graph and preparation operations, while error correction is performed naturally via syndrome-extracting teleportations.
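    The two preparation states can be checked numerically: both are normalized, and the |+_π/4⟩ state gives the expected X-basis measurement statistics. A quick sketch with Python's complex arithmetic, representing each single-qubit state as its pair of amplitudes:

    ```python
    import cmath
    import math

    inv_sqrt2 = 1.0 / math.sqrt(2.0)

    # |+_pi/4> = (|0> + e^{i pi/4} |1>) / sqrt(2), as amplitudes (a0, a1)
    psi = (inv_sqrt2, cmath.exp(1j * math.pi / 4) * inv_sqrt2)
    # |+> = (|0> + |1>) / sqrt(2)
    plus = (inv_sqrt2, inv_sqrt2)

    norm = sum(abs(a) ** 2 for a in psi)                    # should be 1.0
    overlap = sum(p.conjugate() * a for p, a in zip(plus, psi))
    p_plus = abs(overlap) ** 2   # probability of the +1 outcome of an X measurement
    # analytically: |<+|psi>|^2 = (2 + sqrt(2)) / 4 = cos^2(pi/8), about 0.854
    ```

    The non-trivial X-basis statistics of |+_π/4⟩ are what make it a useful extra resource: X and Y measurements alone on |+⟩ states would not suffice for universality.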

  6. Computational Approach for Studying Optical Properties of DNA Systems in Solution

    DEFF Research Database (Denmark)

    Nørby, Morten Steen; Svendsen, Casper Steinmann; Olsen, Jógvan Magnus Haugaard

    2016-01-01

    In this paper we present a study of the methodological aspects regarding calculations of optical properties for DNA systems in solution. Our computational approach is built upon a fully polarizable QM/MM/Continuum model within a damped linear response theory framework. In this approach the environment is given a highly advanced description in terms of the electrostatic potential through the polarizable embedding model. Furthermore, bulk solvent effects are included in an efficient manner through a conductor-like screening model. With the aim of reducing the computational cost we develop a set of averaged partial charges and distributed isotropic dipole-dipole polarizabilities for DNA suitable for describing the classical region in ground-state and excited-state calculations. Calculations of the UV spectrum of the 2-aminopurine optical probe embedded in a DNA double helical structure are presented.

  7. Approach and tool for computer animation of fields in electrical apparatus

    International Nuclear Information System (INIS)

    Miltchev, Radoslav; Yatchev, Ivan S.; Ritchie, Ewen

    2002-01-01

    The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables handling of two- and three-dimensional physical field results obtained from finite element software and the display of movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, scenario creation and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)

  8. An efficient approach for computing the geometrical optics field reflected from a numerically specified surface

    Science.gov (United States)

    Mittra, R.; Rushdi, A.

    1979-01-01

    An approach for computing the geometrical optics field reflected from a numerically specified surface is presented. The approach includes the step of deriving a specular point and begins with computing the rays reflected off the surface at the points where their coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to such a ray.
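    The core geometric step — reflecting an incident ray about the locally specified surface normal — is the standard identity r = d − 2(d·n)n for a unit normal n. A small sketch of just that step (the three-ray-cluster divergence-factor bookkeeping is omitted):

    ```python
    import math

    def reflect(d, n):
        """Reflect direction d about surface normal n: r = d - 2 (d.n) n."""
        # Normalize n defensively, since in this setting the normal comes
        # from numerically specified partial derivatives.
        mag = math.sqrt(sum(c * c for c in n))
        n = tuple(c / mag for c in n)
        dot = sum(a * b for a, b in zip(d, n))
        return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

    # A ray descending at 45 degrees onto a horizontal surface (normal +y)
    # bounces up at 45 degrees:
    r = reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))   # (1.0, 1.0, 0.0)
    ```

    Reflection is length-preserving, so the ray direction keeps its magnitude; amplitude changes along the ray come only from the divergence factor of the mean-ray cluster.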

  9. Seismic safety margins research program. Phase I. Project VII: systems analysis specifications of computational approach

    International Nuclear Information System (INIS)

    Collins, J.D.; Hudson, J.M.; Chrostowski, J.D.

    1979-02-01

    A computational methodology is presented for the prediction of core melt probabilities in a nuclear power plant due to earthquake events. The proposed model has four modules: seismic hazard, structural dynamics (including soil-structure interaction), component failure and core melt sequence. The proposed modules would operate in series and would not have to be run at the same time. The basic statistical approach uses Monte Carlo simulation to treat random and systematic error, but alternate statistical approaches are permitted by the program design.

  10. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  11. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zi-Kui [Pennsylvania State University; Gleeson, Brian [University of Pittsburgh; Shang, Shunli [Pennsylvania State University; Gheno, Thomas [University of Pittsburgh; Lindwall, Greta [Pennsylvania State University; Zhou, Bi-Cheng [Pennsylvania State University; Liu, Xuan [Pennsylvania State University; Ross, Austin [Pennsylvania State University

    2018-04-23

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stability. The developed description includes composition ranges typical for coating alloys and hence allows for the prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.

  12. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
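    The multi-algorithm idea — one MapReduce job carrying several related algorithms, with intermediate data kept apart by an algorithm-tagged key — can be sketched in-memory. Here the "cluster" is a plain dict and the two bundled algorithms (word counting and maximum word length) are invented stand-ins, not MRPack's actual workload:

    ```python
    from collections import defaultdict

    # Each bundled algorithm supplies a map function and a reduce function.
    ALGORITHMS = {
        "wordcount": (lambda rec: [(w, 1) for w in rec.split()],
                      lambda values: sum(values)),
        "maxlen":    (lambda rec: [("max", len(w)) for w in rec.split()],
                      lambda values: max(values)),
    }

    def mrpack_job(records):
        # Map phase: a single pass over the data runs every algorithm,
        # tagging each intermediate key with its algorithm id (multi-key).
        shuffled = defaultdict(list)
        for rec in records:
            for alg_id, (map_fn, _) in ALGORITHMS.items():
                for key, value in map_fn(rec):
                    shuffled[(alg_id, key)].append(value)
        # Reduce phase: dispatch each group to its own algorithm's reducer.
        return {k: ALGORITHMS[k[0]][1](vs) for k, vs in shuffled.items()}

    results = mrpack_job(["alpha beta alpha", "beta gamma"])
    # results[("wordcount", "alpha")] == 2 and results[("maxlen", "max")] == 5
    ```

    The payoff mirrored here is that the input is scanned once for all algorithms, instead of once per algorithm as in a stock single-algorithm MR job.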

  13. A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence

    Institute of Scientific and Technical Information of China (English)

    Gao, Wen; Chen, Xilin

    1997-01-01

    The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in computer vision systems. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work on a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow with stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
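
    The motion-blur relation the authors exploit can be illustrated with a one-dimensional, constant-velocity blur model. This is a simplification of the image-formation model for illustration only, not the paper's restoration algorithm:

```python
def motion_blur_1d(signal, blur_len):
    # Uniform averaging over the last `blur_len` samples models the smear
    # produced by constant-velocity motion during the exposure; the blur
    # extent in pixels divided by the exposure time gives the velocity.
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - blur_len + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

blurred = motion_blur_1d([0, 0, 1, 0, 0], 2)
```

    An impulse is spread over `blur_len` samples, which is why estimating the blur extent from the image lets one recover the motion velocity, and vice versa.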

  14. Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.

    Science.gov (United States)

    Luo, Yao; Wang, Ling

    2017-11-16

    The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. This protein is an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development, and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors, helping to model the structure of mTOR, screen compound databases, uncover structure-activity relationships (SARs), optimize hits, mine privileged fragments, and design focused libraries. Computational approaches have also been applied to study protein-ligand interaction mechanisms and in natural-product-driven drug discovery. Herein, we survey the most recent progress in the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed.

  15. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    Science.gov (United States)

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  16. Computational enzyme design approaches with significant biological outcomes: progress and challenges

    OpenAIRE

    Li, Xiaoman; Zhang, Ziding; Song, Jiangning

    2012-01-01

    Enzymes are powerful biocatalysts, however, so far there is still a large gap between the number of enzyme-based practical applications and that of naturally occurring enzymes. Multiple experimental approaches have been applied to generate nearly all possible mutations of target enzymes, allowing the identification of desirable variants with improved properties to meet the practical needs. Meanwhile, an increasing number of computational methods have been developed to assist in the modificati...

  17. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.
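
    The event-driven idea can be sketched in a few lines: re-solve the expensive optimization only when the state has drifted past a threshold since the last solve, and otherwise hold the previous input. The scalar example and the stand-in `solve` below are hypothetical, not the paper's controller:

```python
def event_driven_control(states, threshold, solve):
    # Re-solve the (expensive) MPC problem only when the state has drifted
    # more than `threshold` from the state at the last solve; otherwise
    # reuse the previous control input.
    last_state, u, solves = None, 0.0, 0
    inputs = []
    for x in states:
        if last_state is None or abs(x - last_state) > threshold:
            u = solve(x)          # expensive optimization, invoked rarely
            last_state = x
            solves += 1
        inputs.append(u)
    return inputs, solves

# Toy proportional "MPC" stand-in: four samples trigger only two solves.
inputs, solves = event_driven_control(
    [0.0, 0.05, 0.2, 0.21], threshold=0.1, solve=lambda x: -2.0 * x)
```

    The computational saving is exactly the ratio of time steps to triggered solves, at the cost of applying a slightly stale input between events.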

  18. A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Ramiyani, Sara Saeidi; Sarvar, Rahim; Moud, Hashem Izadi; Mousavi, Seyyed Mohammad

    2013-01-01

    This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.
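
    The SA half of the hybrid search can be sketched generically. Here a simple quadratic stands in for the GP-derived radiation model, and the proposal width, cooling schedule, and step count are all illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, steps=2000, t0=1.0, cooling=0.995, seed=42):
    # Minimal SA loop: random perturbations are accepted with the Metropolis
    # probability under a geometrically cooled temperature, so early steps
    # explore and late steps refine.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Stand-in objective with a known minimum at x = 3.
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0)
```

    In the paper's setting, SA perturbs the GP expression parameters rather than a scalar, but the accept/cool loop is the same.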

  19. From computer-assisted intervention research to clinical impact: The need for a holistic approach.

    Science.gov (United States)

    Ourselin, Sébastien; Emberton, Mark; Vercauteren, Tom

    2016-10-01

    The early days of the field of medical image computing (MIC) and computer-assisted intervention (CAI), when publishing a strong self-contained methodological algorithm was enough to produce impact, are over. As a community, we now have substantial responsibility to translate our scientific progresses into improved patient care. In the field of computer-assisted interventions, the emphasis is also shifting from the mere use of well-known established imaging modalities and position trackers to the design and combination of innovative sensing, elaborate computational models and fine-grained clinical workflow analysis to create devices with unprecedented capabilities. The barriers to translating such devices in the complex and understandably heavily regulated surgical and interventional environment can seem daunting. Whether we leave the translation task mostly to our industrial partners or welcome, as researchers, an important share of it is up to us. We argue that embracing the complexity of surgical and interventional sciences is mandatory to the evolution of the field. Being able to do so requires large-scale infrastructure and a critical mass of expertise that very few research centres have. In this paper, we emphasise the need for a holistic approach to computer-assisted interventions where clinical, scientific, engineering and regulatory expertise are combined as a means of moving towards clinical impact. To ensure that the breadth of infrastructure and expertise required for translational computer-assisted intervention research does not lead to a situation where the field advances only thanks to a handful of exceptionally large research centres, we also advocate that solutions need to be designed to lower the barriers to entry. Inspired by fields such as particle physics and astronomy, we claim that centralised very large innovation centres with state of the art technology and health technology assessment capabilities backed by core support staff and open

  20. Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.

    Science.gov (United States)

    Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian

    2017-05-01

    Alcohol use disorder (AUD), and addiction in general, is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish, and relapse after detoxification is common; it can be promoted by consumption of small amounts of alcohol as well as by exposure to alcohol-associated cues or stress. While those environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanisms on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase the salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates, such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-instrumental transfer can quantify to what degree Pavlovian conditioned stimuli facilitate approach behavior, including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in the psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
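
    The model-free learning component referred to above is commonly formalized as a Rescorla-Wagner/delta-rule update, in which a prediction error drives the value estimate. A minimal sketch (the learning rate and reward sequence are illustrative):

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    # Delta-rule value learning: on each trial the value estimate v is
    # nudged by the prediction error (r - v), scaled by learning rate alpha.
    # The prediction error is the quantity whose neural correlate is
    # discussed for fronto-striatal circuits.
    v, values = v0, []
    for r in rewards:
        v += alpha * (r - v)
        values.append(v)
    return values

# Two rewarded trials with alpha = 0.5: value climbs 0 -> 0.5 -> 0.75.
values = rescorla_wagner([1.0, 1.0], alpha=0.5)
```

    Fitting `alpha` (and richer model-based variants) to individual choice data is what allows the individual differences described above to be quantified.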

  1. Fourier-based approach to interpolation in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2001-01-01

    It has recently been shown that longitudinal aliasing can be a significant and detrimental presence in reconstructed single-slice helical computed tomography (CT) volumes. This aliasing arises because the directly measured data in helical CT are generally undersampled by a factor of at least 2 in the longitudinal direction, and because the exploitation of the redundancy of fanbeam data acquired over 360° to generate additional longitudinal samples does not automatically eliminate the aliasing. In this paper we demonstrate that for pitches near 1 or lower, the redundant fanbeam data, when used properly, can provide sufficient information to satisfy a generalized sampling theorem and thus to eliminate aliasing. We develop and evaluate a Fourier-based algorithm, called 180FT, that accomplishes this. As background we present a second Fourier-based approach, called 360FT, that makes use only of the directly measured data. Both Fourier-based approaches exploit the fast Fourier transform and the Fourier shift theorem to generate from the helical projection data a set of fanbeam sinograms corresponding to equispaced transverse slices. Slice-by-slice reconstruction is then performed by use of two-dimensional fanbeam algorithms. The proposed approaches are compared to their counterparts based on the use of linear interpolation - the 360LI and 180LI approaches. The aliasing suppression property of the 180FT approach is a clear advantage of the approach and represents a step toward the desirable goal of achieving uniform longitudinal resolution properties in reconstructed helical CT volumes.
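
    The Fourier shift theorem on which both the 360FT and 180FT approaches rely is easy to demonstrate on a toy 1D signal. The sketch below (a pure-Python DFT, exact for integer circular shifts) is illustrative, not the paper's implementation:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fourier_shift(x, delta):
    # Shift theorem: multiplying the spectrum by the linear phase ramp
    # exp(-2*pi*i*k*delta/N) shifts the signal circularly by `delta`
    # samples (exact here for integer delta).
    N = len(x)
    X = dft(x)
    return idft([X[k] * cmath.exp(-2j * cmath.pi * k * delta / N)
                 for k in range(N)])

# An impulse at sample 0 moves to sample 1 without any spatial interpolation.
shifted = fourier_shift([1.0, 0.0, 0.0, 0.0], 1)
```

    In the 180FT/360FT algorithms the same phase-ramp trick, applied with the FFT, slides helical projection data to the longitudinal positions of equispaced transverse slices.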

  2. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    Science.gov (United States)

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to N_e = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
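
    The exact computation rests on a standard identity for absorbing Markov chains: the expected number of steps to absorption from each transient state solves (I - Q) t = 1, where Q is the transient-to-transient block of the transition matrix. A minimal dense sketch follows (the paper works with sparse linear algebra at vastly larger scale):

```python
def expected_absorption_times(Q):
    # Solve (I - Q) t = 1 by Gaussian elimination with partial pivoting:
    # the fundamental-matrix identity behind exact allele-age moments.
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):
        t[r] = (A[r][n] - sum(A[r][c] * t[c] for c in range(r + 1, n))) / A[r][r]
    return t

# Symmetric random walk on {1, 2} absorbed at 0 and 3: both states take
# an expected 2 steps to absorption.
t = expected_absorption_times([[0.0, 0.5], [0.5, 0.0]])
```

    Replacing Q with a Wright-Fisher transition block (and conditioning appropriately) yields allele-age expectations without any of the classical approximations.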

  3. Computational Approaches for Prediction of Pathogen-Host Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Esmaeil eNourani

    2015-02-01

    Infectious diseases are still among the major and prevalent health problems, mostly because of the drug resistance of novel variants of pathogens. Molecular interactions between pathogens and their hosts are the key part of the infection mechanisms. Novel antimicrobial therapeutics to fight drug resistance are only possible in the case of a thorough understanding of pathogen-host interaction (PHI) systems. Existing databases, which contain experimentally verified PHI data, suffer from scarcity of reported interactions due to the technically challenging and time-consuming process of experiments. This has motivated many researchers to address the problem by proposing computational approaches for analysis and prediction of PHIs. The computational methods primarily utilize sequence information, protein structure and known interactions. Classic machine learning techniques are used when there are sufficient known interactions to be used as training data. In the opposite case, transfer and multi-task learning methods are preferred. Here, we present an overview of these computational approaches for PHI prediction, discussing their weaknesses and strengths, with future directions.

  4. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to the several lexical knowledge bases (LKBs) that have been available for this language for a longer time. These resources were exploited to answer word similarity tests, which also became available for Portuguese recently. We conclude that there are several valid approaches for this task, but no single one outperforms all the others in every test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity; in general, however, better results are obtained when knowledge from different sources is combined.
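
    The distributional side of the comparison reduces to cosine similarity between word vectors. The toy co-occurrence counts below are invented for illustration (they are not from the paper's models):

```python
import math

def cosine(u, v):
    # Cosine of the angle between two word vectors: the standard measure
    # used with distributional models to answer word similarity tests.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical counts over four contexts: ("miar", "ladrar", "animal", "conduzir")
vec = {
    "gato":  [8, 0, 3, 0],
    "cão":   [0, 9, 4, 0],
    "carro": [0, 0, 0, 5],
}
sim_animals = cosine(vec["gato"], vec["cão"])      # share the "animal" context
sim_unrelated = cosine(vec["gato"], vec["carro"])  # no shared contexts
```

    LKB-based approaches replace the vectors with graph measures over the knowledge base, which is why the two families capture relatedness and genuine similarity differently.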

  5. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Directory of Open Access Journals (Sweden)

    Anyela Camargo

    Computer-vision-based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open-source software; source code for data analysis is written in R. The equations to calculate image descriptors have also been provided.
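
    The statistical step, extracting the dominant directions of shape variation, can be sketched with a power iteration for the first principal component. This pure-Python sketch is illustrative only; the paper's pipeline is implemented in R:

```python
def first_principal_component(X, iters=200):
    # Power iteration on the (implicitly applied) covariance matrix
    # C = Xc^T Xc / n to find the leading direction of variation.
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(Xc[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(Xc[i][j] * proj[i] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy "feature matrix" whose variation lies entirely along (1, 1):
v = first_principal_component([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

    Repeating with deflation (or an eigendecomposition) yields the five components that the analysis above finds sufficient to describe rosette shape.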

  6. Low rank approach to computing first and higher order derivatives using automatic differentiation

    International Nuclear Information System (INIS)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-01-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large-scale computational models. By using the principles of the Efficient Subspace Method (ESM), low-rank approximations of the first- and higher-order derivatives can be calculated using minimal computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined, according to ESM, by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium-cooled reactors. (authors)
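
    The rank-probing idea behind ESM can be sketched with finite differences standing in for AD-computed directional derivatives J·d. The toy model, probe count, and tolerance below are assumptions for illustration:

```python
import random

def directional_derivative(f, x, d, eps=1e-6):
    # Finite-difference stand-in for a forward-mode AD product J·d.
    fx = f(x)
    fxd = f([xi + eps * di for xi, di in zip(x, d)])
    return [(a - b) / eps for a, b in zip(fxd, fx)]

def effective_rank(f, x, dim, probes=8, tol=1e-4, seed=0):
    # ESM-style probing: the number of linearly independent responses to
    # random input perturbations estimates the numerical rank of J.
    rng = random.Random(seed)
    basis = []
    for _ in range(probes):
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        y = directional_derivative(f, x, d)
        for b in basis:  # Gram-Schmidt against the basis found so far
            coef = sum(a * c for a, c in zip(y, b))
            y = [a - coef * c for a, c in zip(y, b)]
        norm = sum(a * a for a in y) ** 0.5
        if norm > tol:
            basis.append([a / norm for a in y])
    return len(basis)

def toy_model(x):
    # Five inputs, four outputs, but only rank-2 input-output sensitivity.
    s = x[0] + x[1] + x[2]
    t = x[3] - x[4]
    return [s, 2.0 * s, t, 3.0 * t]

rank = effective_rank(toy_model, [1.0, 2.0, 3.0, 4.0, 5.0], dim=5)
```

    Once the rank-2 subspace is identified, only two pseudo-variable derivatives need to be propagated through AD instead of five, which is the source of the savings.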

  7. New Approaches to the Computer Simulation of Amorphous Alloys: A Review.

    Science.gov (United States)

    Valladares, Ariel A; Díaz-Celaya, Juan A; Galván-Colín, Jonathan; Mejía-Mendoza, Luis M; Reyes-Retana, José A; Valladares, Renela M; Valladares, Alexander; Alvarez-Ramirez, Fernando; Qu, Dongdong; Shen, Jun

    2011-04-13

    In this work we review our new methods to computer-generate amorphous atomic topologies of several binary alloys: SiH, SiN, CN; binary systems based on group IV elements like SiC; the GeSe2 chalcogenide; aluminum-based systems: AlN and AlSi; and the CuZr amorphous alloy. We use an ab initio approach based on density functionals and computationally thermally-randomized periodically-continued cells with at least 108 atoms. The computational thermal process used to generate the amorphous alloys is the undermelt-quench approach, or one of its variants, which consists in linearly heating the samples to just below their melting (or liquidus) temperatures, and then linearly cooling them afterwards. These processes are carried out from initial crystalline conditions using short and long time steps. We find that a step four times the default time step is adequate for most of the simulations. Radial distribution functions (partial and total) are calculated and compared whenever possible with experimental results, and the agreement is very good. For some materials we report studies of the effect of topological disorder on their electronic and vibrational densities of states and on their optical properties.

  8. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  9. DIRProt: a computational approach for discriminating insecticide resistant proteins from non-resistant proteins.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Banchariya, Anjali; Rao, Atmakuri Ramakrishna

    2017-03-24

    Insecticide resistance is a major challenge for insect pest control programs in the fields of crop protection, human and animal health, etc. Resistance to different insecticides is conferred by proteins encoded by certain classes of genes of the insects. To date, no computational tool is available to distinguish insecticide-resistant proteins from non-resistant proteins. Development of such a computational tool will thus be helpful in predicting insecticide-resistant proteins, which can be targeted for developing appropriate insecticides. Five different feature sets, viz. amino acid composition (AAC), dipeptide composition (DPC), pseudo amino acid composition (PAAC), composition-transition-distribution (CTD) and autocorrelation function (ACF), were used to map the protein sequences into numeric feature vectors. The encoded numeric vectors were then used as input to a support vector machine (SVM) for classification of insecticide-resistant and non-resistant proteins. Higher accuracies were obtained under the RBF kernel than under the other kernels. Further, accuracies were observed to be higher for the DPC feature set than for the others. The proposed approach achieved an overall accuracy of >90% in discriminating resistant from non-resistant proteins. Further, the two classes of resistant proteins, i.e., detoxification-based and target-based, were discriminated from non-resistant proteins with >95% accuracy. Besides, >95% accuracy was also observed for discrimination of proteins involved in detoxification- and target-based resistance mechanisms. The proposed approach not only outperformed the Blastp, PSI-Blast and Delta-Blast algorithms, but also achieved >92% accuracy when assessed using an independent dataset of 75 insecticide-resistant proteins. This paper presents the first computational approach for discriminating insecticide-resistant proteins from non-resistant proteins. Based on the proposed approach, an online prediction server DIRProt has

  10. Soft computing approach to 3D lung nodule segmentation in CT.

    Science.gov (United States)

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to handle some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release - the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Elucidating Ligand-Modulated Conformational Landscape of GPCRs Using Cloud-Computing Approaches.

    Science.gov (United States)

    Shukla, Diwakar; Lawrenz, Morgan; Pande, Vijay S

    2015-01-01

    G-protein-coupled receptors (GPCRs) are a versatile family of membrane-bound signaling proteins. Despite the recent successes in obtaining crystal structures of GPCRs, much needs to be learned about the conformational changes associated with their activation. Furthermore, the mechanism by which ligands modulate the activation of GPCRs has remained elusive. Molecular simulations provide a way of obtaining a detailed atomistic description of GPCR activation dynamics. However, simulating GPCR activation is challenging due to the long timescales involved and the associated challenge of gaining insights from the "Big" simulation datasets. Here, we demonstrate how cloud-computing approaches have been used to tackle these challenges and obtain insights into the activation mechanism of GPCRs. In particular, we review the use of Markov state model (MSM)-based sampling algorithms for sampling milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2-AR. MSMs of agonist- and inverse-agonist-bound β2-AR reveal multiple activation pathways and show how ligands function via modulation of the ensemble of activation pathways. We target this ensemble of conformations with computer-aided drug design approaches, with the goal of designing drugs that interact more closely with diverse receptor states, for overall increased efficacy and specificity. We conclude by discussing how cloud-based approaches present a powerful and broadly available tool for routinely studying complex biological systems. © 2015 Elsevier Inc. All rights reserved.
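
    At the core of an MSM is a lag-time transition matrix estimated from state-discretized trajectories. A minimal row-normalized count estimator is sketched below; production MSM software adds reversibility constraints, lag-time selection, and error models:

```python
from collections import Counter

def transition_matrix(traj, n_states, lag=1):
    # Count transitions i -> j separated by `lag` steps in a discretized
    # trajectory, then normalize each row to transition probabilities.
    counts = Counter(zip(traj[:-lag], traj[lag:]))
    probs = []
    for i in range(n_states):
        row = [counts[(i, j)] for j in range(n_states)]
        total = sum(row)
        probs.append([c / total for c in row] if total else [0.0] * n_states)
    return probs

# Toy two-state trajectory (e.g. inactive = 0, active = 1 conformations):
T = transition_matrix([0, 0, 1, 1, 0, 0, 1], n_states=2)
```

    Eigenvectors of T then give metastable states and the relative fluxes along the multiple activation pathways discussed above, even when each underlying cloud-run trajectory is far shorter than the activation timescale.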

  12. Tailor-made Design of Chemical Blends using Decomposition-based Computer-aided Approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Manan, Zainuddin Abd.; Gernaey, Krist

    Computer-aided techniques are an efficient approach to solving chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product attributes (properties). In this way, the systematic computer-aided technique first establishes the search space, and then narrows it down in subsequent steps until a small number of feasible and promising candidates remain, at which point experimental work may be conducted to verify if any or all of the candidates satisfy the targets. The design problem is decomposed into two stages. The first stage investigates the mixture stability, where all unstable mixtures are eliminated and the stable blend candidates are retained for further testing. In the second stage, the blend candidates have to satisfy a set of target properties that are ranked according to ...

  13. Solubility of magnetite in high temperature water and an approach to generalized solubility computations

    International Nuclear Information System (INIS)

    Dinov, K.; Ishigure, K.; Matsuura, C.; Hiroishi, D.

    1993-01-01

    Magnetite solubility in pure water was measured at 423 K in a fully teflon-covered autoclave system. A fairly good agreement was found to exist between the experimental data and calculation results obtained from the thermodynamical model, based on the assumption of Fe3O4 dissolution and Fe2O3 deposition reactions. A generalized thermodynamical approach to the solubility computations under complex conditions, on the basis of minimization of the total system Gibbs free energy, was proposed. The forms of the chemical equilibria were obtained for various systems initially defined and successfully justified by the subsequent computations. A [Fe3+]T-[Fe2+]T phase diagram was introduced as a tool for systematic understanding of the magnetite dissolution phenomena in pure water and under oxidizing and reducing conditions. (orig.)

  14. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

    Distributed constraint satisfaction problems (DisCSPs) are among the most widely studied problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based ERE algorithm and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.

  15. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing High Throughput Computing applications, common in High Energy Physics, work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts. First, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way; integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  16. An innovative computationally efficient hydromechanical coupling approach for fault reactivation in geological subsurface utilization

    Science.gov (United States)

    Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.

    2018-02-01

    Estimating the efficiency and sustainability of geological subsurface utilization, such as Carbon Capture and Storage (CCS), requires an integrated risk assessment approach that considers the coupled processes involved, among others the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to produce reliable environmental impact assessments. The required sensitivity analyses therefore consume significant computational time, owing to the high number of realizations that have to be carried out. Because of the high computational cost of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, such simulations are not applicable for uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation is introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is directly implemented into a multiphase fluid flow simulator, which carries out the semi-analytical coupling for the simulation of hydromechanical processes. In this way, the iterative parameter exchange between the multiphase flow and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable of reducing the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
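
The core idea, tabulating the void ratio against one reference fault-base pressure from a single base run and using it as a cheap surrogate for the mechanical solver, can be sketched as follows. The numbers are entirely synthetic placeholders, and the Kozeny-Carman-type porosity-permeability relation is one common choice assumed here, not necessarily the paper's parametrization.

```python
import bisect

# Void ratio vs. reference fault-base pressure from ONE preliminary
# coupled base run (values below are synthetic placeholders).
P_REF = [10.0, 12.0, 14.0, 16.0, 18.0]       # pressure, MPa
E_REF = [0.150, 0.153, 0.158, 0.165, 0.174]  # void ratio

def void_ratio(p):
    """Piecewise-linear surrogate e(p) replacing the mechanical solver."""
    p = min(max(p, P_REF[0]), P_REF[-1])
    i = min(max(bisect.bisect_right(P_REF, p), 1), len(P_REF) - 1)
    t = (p - P_REF[i - 1]) / (P_REF[i] - P_REF[i - 1])
    return E_REF[i - 1] + t * (E_REF[i] - E_REF[i - 1])

def update_hydraulic_props(p, k0=1e-16):
    """Porosity and permeability update driven by the reference pressure
    alone, via an assumed Kozeny-Carman-type relation."""
    e = void_ratio(p)
    phi = e / (1.0 + e)
    phi0 = E_REF[0] / (1.0 + E_REF[0])
    k = k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2
    return phi, k

print(update_hydraulic_props(14.0))  # (porosity, permeability in m^2)
```

Because the flow simulator only ever evaluates these closed-form functions, no iteration with a mechanical simulator is needed, which is where the claimed speed-up comes from.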

  17. Computer assisted collimation gamma camera: A new approach to imaging contaminated tissues

    International Nuclear Information System (INIS)

    Quartuccio, M.; Franck, D.; Pihet, P.; Begot, S.; Jeanguillaume, C.

    2000-01-01

    Measurement systems capable of imaging tissues contaminated with radioactive materials would find relevant applications in medical physics research and possibly in health physics. The latter in particular depends critically on the sensitivity and spatial resolution achieved. An original approach, the computer-assisted collimation gamma camera (French acronym CACAO), which could meet these requirements, has been proposed elsewhere. CACAO requires detectors with high spatial resolution. The present work investigated the application of the CACAO principle on a laboratory test bench using silicon detectors made of small pixels. (author)

  18. Computer assisted collimation gamma camera: A new approach to imaging contaminated tissues

    Energy Technology Data Exchange (ETDEWEB)

    Quartuccio, M.; Franck, D.; Pihet, P.; Begot, S.; Jeanguillaume, C.

    2000-07-01

    Measurement systems capable of imaging tissues contaminated with radioactive materials would find relevant applications in medical physics research and possibly in health physics. The latter in particular depends critically on the sensitivity and spatial resolution achieved. An original approach, the computer-assisted collimation gamma camera (French acronym CACAO), which could meet these requirements, has been proposed elsewhere. CACAO requires detectors with high spatial resolution. The present work investigated the application of the CACAO principle on a laboratory test bench using silicon detectors made of small pixels. (author)

  19. Seismic Safety Margins Research Program (Phase I). Project VII. Systems analysis specification of computational approach

    International Nuclear Information System (INIS)

    Wall, I.B.; Kaul, M.K.; Post, R.I.; Tagart, S.W. Jr.; Vinson, T.J.

    1979-02-01

    An initial specification is presented of a computational approach for a probabilistic risk assessment model for use in the Seismic Safety Margins Research Program. This model encompasses the whole seismic calculational chain: from seismic input, through soil-structure interaction and transfer functions, to the probability of component failure, and the integration of these failures into a system model to estimate the probability of a release of radioactive material to the environment. It is intended that the primary use of this model will be in sensitivity studies, to assess the potential conservatism of different modeling elements in the chain and to provide guidance on priorities for research in the seismic design of nuclear power plants.
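
The last link of such a chain, from per-component fragility to a system-level failure probability, is often modeled with lognormal fragility curves. The sketch below illustrates that step only; the medians, dispersions, and the independence assumption between components are simplifying illustrations, not the program's actual model.

```python
import math

def fragility(pga, median, beta):
    """Lognormal fragility curve: P(component fails | peak ground
    acceleration), with median capacity and log-standard deviation beta."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median) /
                                 (beta * math.sqrt(2.0))))

def system_failure(pga, components):
    """Series system: any component failure fails the system, assuming
    (simplistically) independent component failures."""
    p_survive = 1.0
    for median, beta in components:
        p_survive *= 1.0 - fragility(pga, median, beta)
    return 1.0 - p_survive

# Two hypothetical components: (median capacity in g, beta)
comps = [(0.8, 0.4), (1.2, 0.5)]
print(system_failure(0.8, comps))
```

Integrating this conditional failure probability over a seismic hazard curve would then yield the annual release frequency, which is exactly the kind of end-to-end sensitivity the abstract says the model is intended to probe.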

  20. Tensor Voting: A Perceptual Organization Approach to Computer Vision and Machine Learning

    CERN Document Server

    Mordohai, Philippos

    2006-01-01

    This lecture presents research on a general framework for perceptual organization conducted mainly at the Institute for Robotics and Intelligent Systems of the University of Southern California. It is not written as a historical account of the work, since the presentation does not follow chronological order. It aims to present an approach to a wide range of problems in computer vision and machine learning that is data-driven, local, and requires a minimal number of assumptions. The tensor voting framework combines these properties and provides a unified perceptual organiza
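
A drastically simplified flavor of the voting idea can be sketched in 2D: each point casts a distance-weighted "ball" vote (the outer product of the unit direction toward it) at every other point, and the eigenvalue gap of the accumulated 2x2 tensor signals curve-like structure. This omits the framework's stick votes and curvature-dependent decay; the kernel and parameters are illustrative assumptions only.

```python
import math

def ball_vote_saliency(points, sigma=1.0):
    """For each point, accumulate Gaussian-weighted outer products of
    unit directions to all neighbors, then return the eigenvalue gap
    (l1 - l2) of the summed 2x2 tensor: large when neighbors are
    collinear (curve saliency), near zero when they are isotropic."""
    saliency = []
    for i, (xi, yi) in enumerate(points):
        a = b = c = 0.0                      # tensor [[a, b], [b, c]]
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = math.hypot(dx, dy)
            w = math.exp(-(d * d) / (sigma * sigma))
            ux, uy = dx / d, dy / d
            a += w * ux * ux
            b += w * ux * uy
            c += w * uy * uy
        # closed-form eigenvalues of a symmetric 2x2 matrix
        dev = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
        saliency.append(2.0 * dev)           # l1 - l2
    return saliency

line = [(float(k), 0.0) for k in range(5)]
print(ball_vote_saliency(line))  # collinear support -> large gaps
```

For collinear points all votes align along one axis, so the gap is large, whereas at the center of an isotropic neighborhood (e.g., a cross of points) the votes cancel and the gap vanishes, which is the data-driven, assumption-free flavor the lecture describes.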