WorldWideScience

Sample records for computationally efficient approach

  1. An Efficient Approach for Computing Silhouette Coefficients

    Directory of Open Access Journals (Sweden)

    Moh'd B. Al-Zoubi

    2008-01-01

    One popular approach for finding the best number of clusters (K) in a data set is through computing the silhouette coefficients: the silhouette coefficients for different values of K are computed first, and the K with the maximum coefficient is chosen. However, computing the silhouette coefficient for different values of K is a very time consuming process, owing to the amount of CPU time spent on distance calculations. An approach to compute the silhouette coefficient quickly is presented, based on decreasing the number of addition operations when computing distances. The results were efficient: a saving of more than 50% of the CPU time was achieved when the approach was applied to different data sets.
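
    A minimal sketch of the baseline procedure this record accelerates: standard silhouette-based selection of K, here via scikit-learn. It shows the repeated distance-heavy computation the paper targets; the authors' addition-saving optimization itself is not reproduced.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def best_k(X, k_range=range(2, 11), seed=0):
          scores = {}
          for k in k_range:
              labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
              scores[k] = silhouette_score(X, labels)  # mean silhouette over all points
          return max(scores, key=scores.get), scores

      X = np.random.rand(300, 4)   # toy data standing in for a real data set
      k, scores = best_k(X)
      print(k, scores[k])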

  2. Efficient Approach for Load Balancing in Virtual Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Harvinder singh

    2014-10-01

    Cloud computing technology is changing the focus of the IT world and is becoming popular because of its attractive characteristics. Load balancing is one of the main challenges in cloud computing: workloads must be distributed across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources. Successful load balancing optimizes resource use, maximizes throughput, minimizes response time, and avoids overload. The objective of this paper is to propose an approach to scheduling algorithms that can maintain load balancing and provide improved strategies through efficient job scheduling and modified resource allocation techniques. The results discussed in this paper are based on the existing round robin, least connection, throttled load balance, and fastest response time scheduling algorithms, together with a newly proposed algorithm, fastest with least connection. The new algorithm improves the overall response time and data centre processing time and reduces cost in comparison to the existing scheduling approaches.
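
    The abstract does not specify the "fastest with least connection" policy in detail; the sketch below is a hedged guess at the general shape of such a dispatcher, with invented VM fields and an assumed tiebreak on observed response time.

      from dataclasses import dataclass

      @dataclass
      class VM:
          name: str
          active: int = 0                  # current number of active connections
          avg_response_ms: float = 0.0     # moving average of observed latency

      def pick_vm(vms):
          # Fewest active connections first; break ties on fastest response.
          return min(vms, key=lambda vm: (vm.active, vm.avg_response_ms))

      vms = [VM("vm1", 3, 120.0), VM("vm2", 1, 95.0), VM("vm3", 1, 80.0)]
      target = pick_vm(vms)                # -> vm3
      target.active += 1                   # register the dispatched request
      print(target.name)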

  3. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
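
    The workflow reduces to: replace the finite element model with a cheap surrogate, then sample the posterior with MCMC. A bare-bones random-walk Metropolis version is sketched below with an invented surrogate; the paper's sparse-grid surrogate, weighted likelihood, and DRAM sampler are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      def surrogate_strain(theta):
          # Placeholder surrogate: maps damage params (x, y, size) to 5 strains.
          x, y, s = theta
          return s * np.exp(-((np.arange(5) - 2 * x) ** 2) / (1 + y ** 2))

      obs = surrogate_strain([0.5, 0.3, 1.2]) + rng.normal(0, 0.01, 5)

      def log_post(theta, sigma=0.01):
          if not all(0 <= t <= 2 for t in theta):
              return -np.inf                       # uniform prior on [0, 2]^3
          r = obs - surrogate_strain(theta)
          return -0.5 * np.dot(r, r) / sigma**2    # Gaussian likelihood

      theta = np.array([1.0, 1.0, 1.0])
      lp = log_post(theta)
      samples = []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.05, 3)    # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      print(np.mean(samples[1000:], axis=0))       # posterior mean after burn-in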

  4. A Monomial Chaos Approach for Efficient Uncertainty Quantification in Computational Fluid Dynamics

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Bijl, H.

    2006-01-01

    A monomial chaos approach is proposed for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can still be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equations …

  5. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is on the order of 10^6 times shorter than the original highly …

  6. MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    J.K. Verma

    2014-06-01

    The last few decades have witnessed steep growth in the demand for computational power, driven largely by the shift from the industrial age to the Information and Communication Technology (ICT) age, itself a result of the digital revolution. This trend in demand led to the establishment of large-scale data centers at geographically distant locations. These large-scale data centers consume a large amount of electrical energy, which results in very high operating costs and large amounts of carbon dioxide (CO2) emission due to resource underutilization. We propose the MADLVF algorithm to overcome problems such as resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.

  7. Computational approaches for efficient modelling of small atmospheric clusters

    DEFF Research Database (Denmark)

    Elm, Jonas; Mikkelsen, Kurt Valentin

    2014-01-01

    Utilizing a comprehensive test set of 205 clusters of atmospheric relevance, we investigate how different DFT functionals (M06-2X, PW91, ωB97X-D) and basis sets (6-311++G(3df,3pd), 6-31++G(d,p), 6-31+G(d)) affect the thermal contribution to the Gibbs free energy and the single point energy. Reducing the basis set used in the geometry and frequency calculation from 6-311++G(3df,3pd) → 6-31++G(d,p) implies a significant speed-up in computational time and only leads to small errors in the thermal contribution to the Gibbs free energy and the subsequent coupled cluster single point energy calculation.

  8. A computationally efficient approach for template matching-based image registration

    Indian Academy of Sciences (India)

    Vilas H Gaidhane; Yogesh V Hote; Vijander Singh

    2014-04-01

    Image registration using template matching is an important step in image processing. In this paper, a simple, robust and computationally efficient approach is presented. The proposed approach is based on the properties of a normalized covariance matrix. Its main advantage is that image matching can be achieved without calculating the eigenvalues and eigenvectors of the covariance matrix, which reduces the computational complexity. The experimental results show that the proposed approach performs better in the presence of various types of noise and rigid geometric transformations.
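
    For orientation, the classical baseline looks as follows: slide the template over the image and score each window by its normalized covariance (Pearson correlation) with the template. The paper's contribution, avoiding eigen-decomposition of the covariance matrix, is not shown here.

      import numpy as np

      def match(image, tmpl):
          th, tw = tmpl.shape
          t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
          best, best_rc = -np.inf, (0, 0)
          H, W = image.shape
          for r in range(H - th + 1):
              for c in range(W - tw + 1):
                  p = image[r:r + th, c:c + tw]
                  p = (p - p.mean()) / (p.std() + 1e-12)
                  score = (t * p).mean()          # normalized covariance score
                  if score > best:
                      best, best_rc = score, (r, c)
          return best_rc, best

      img = np.random.rand(64, 64)
      tpl = img[20:28, 30:38].copy()              # template cut from the image
      print(match(img, tpl))                      # recovers (20, 30)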

  9. A Computationally Efficient and Adaptive Approach for Online Embedded Machinery Diagnosis in Harsh Environments

    Directory of Open Access Journals (Sweden)

    Chuan Jiang

    2013-01-01

    Condition-based monitoring (CBM) has advanced to the stage where industry is now demanding machinery that possesses self-diagnosis ability. This need has spurred CBM research into ever broader application areas over the past decades. There are two critical issues in implementing CBM in harsh environments using embedded systems: computational efficiency and adaptability. In this paper, a computationally efficient and adaptive approach, comprising simple principal component analysis (SPCA) for feature dimensionality reduction and K-means clustering for classification, is proposed for online embedded machinery diagnosis. Compared with standard principal component analysis (PCA) and kernel principal component analysis (KPCA), SPCA is adaptive in nature and has lower algorithmic complexity when dealing with a large amount of data. The effectiveness of the proposed approach is first validated using a standard rolling element bearing test dataset on a personal computer. It is then deployed on an embedded real-time controller and used to monitor a rotating shaft. It was found that the proposed approach scaled well, whereas the standard PCA-based approach broke down when data quantity increased to a certain level. Furthermore, the proposed approach achieved 90% accuracy when diagnosing an induced fault, compared to 59% accuracy obtained using the standard PCA-based approach.
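
    A compact stand-in for the described pipeline, with ordinary PCA replacing SPCA and synthetic features replacing vibration data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      healthy = rng.normal(0.0, 1.0, (200, 16))   # 16 spectral features per frame
      faulty  = rng.normal(2.5, 1.0, (200, 16))
      X = np.vstack([healthy, faulty])

      Z = PCA(n_components=3).fit_transform(X)    # reduce 16 -> 3 dimensions
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
      print(np.bincount(labels[:200]), np.bincount(labels[200:]))  # cluster purity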

  10. A novel approach to computationally efficient algorithms for transmission loss and line flow formulations

    Energy Technology Data Exchange (ETDEWEB)

    Nanda, J.; Lai, L.L.; Ma, J.T.; Rajkumar, N. [City University, London (United Kingdom). Energy Systems Group; Nanda, A. [Joslyn High Voltage Corp., Chicago, IL (United States); Prasad, M. [ABB, New Delhi (India)

    1999-11-01

    This paper presents a novel approach to powerful, effective and computationally efficient algorithms for the formulation and evaluation of transmission loss and line flow through efficient loss coefficients and distribution factors, respectively, which are well suited to real-time application. These loss coefficients and distribution factors are generated elegantly and efficiently from an available load flow solution with trivial computational burden. Results on a few IEEE test systems reveal that the loss coefficients evaluated at normal operating conditions are quite robust and, for all practical purposes, need not be re-evaluated for wide changes in system operating conditions when evaluating transmission loss or an economic load dispatch solution. (author)

  11. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    Science.gov (United States)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.

  12. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin that corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system demonstrate the effectiveness of the proposed approach.
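
    One ingredient, the cubic-spline step, can be illustrated in isolation: fit a spline through a few sampled points of the loading-margin curve and locate its nose. The numbers below are invented for illustration, not power-flow results.

      import numpy as np
      from scipy.interpolate import CubicSpline

      V = np.array([1.00, 0.95, 0.90, 0.85, 0.80, 0.75])   # bus voltage (p.u.)
      lam = np.array([0.0, 0.45, 0.78, 0.97, 1.05, 1.02])  # sampled loading margin

      cs = CubicSpline(V[::-1], lam[::-1])      # spline over ascending V
      Vg = np.linspace(V.min(), V.max(), 1000)
      nose = Vg[np.argmax(cs(Vg))]              # voltage at maximum loadability
      margin = float(cs(nose))
      print(f"estimated nose: V = {nose:.3f} p.u., margin = {margin:.3f}")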

  13. A Computationally Efficient State Space Approach to Estimating Multilevel Regression Models and Multilevel Confirmatory Factor Models.

    Science.gov (United States)

    Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai

    2014-01-01

    Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.

  14. Quantum propagation of electronic excitations in macromolecules: A computationally efficient multiscale approach

    Science.gov (United States)

    Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.

    2016-07-01

    We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which enables calculations on large systems over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ 1 ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.

  15. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    Science.gov (United States)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  16. Unsupervised Approaches for Post-Processing in Computationally Efficient Waveform-Similarity-Based Earthquake Detection

    Science.gov (United States)

    Bergen, K.; Yoon, C. E.; O'Reilly, O. J.; Beroza, G. C.

    2015-12-01

    Recent improvements in computational efficiency for waveform correlation-based detections achieved by new methods such as Fingerprint and Similarity Thresholding (FAST) promise to allow large-scale blind search for similar waveforms in long-duration continuous seismic data. Waveform similarity search applied to datasets of months to years of continuous seismic data will identify significantly more events than traditional detection methods. With the anticipated increase in number of detections and associated increase in false positives, manual inspection of the detection results will become infeasible. This motivates the need for new approaches to process the output of similarity-based detection. We explore data mining techniques for improved detection post-processing. We approach this by considering similarity-detector output as a sparse similarity graph with candidate events as vertices and similarities as weighted edges. Image processing techniques are leveraged to define candidate events and combine results individually processed at multiple stations. Clustering and graph analysis methods are used to identify groups of similar waveforms and assign a confidence score to candidate detections. Anomaly detection and classification are applied to waveform data for additional false detection removal. A comparison of methods will be presented and their performance will be demonstrated on a suspected induced and non-induced earthquake sequence.
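
    The graph view described here is easy to make concrete: with candidate events as vertices and similarity scores as weighted edges, groups of similar waveforms fall out as connected components after thresholding. A small union-find sketch with invented scores:

      def group_detections(n, edges, thresh=0.8):
          parent = list(range(n))
          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]    # path halving
                  i = parent[i]
              return i
          for i, j, sim in edges:
              if sim >= thresh:                    # keep only strong similarities
                  parent[find(i)] = find(j)
          groups = {}
          for i in range(n):
              groups.setdefault(find(i), []).append(i)
          return list(groups.values())

      edges = [(0, 1, 0.93), (1, 2, 0.85), (3, 4, 0.91), (0, 3, 0.40)]
      print(group_detections(5, edges))            # -> [[0, 1, 2], [3, 4]]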

  17. A model order reduction approach to construct efficient and reliable virtual charts in computational homogenisation

    OpenAIRE

    Kerfriden, Pierre; Goury, Olivier; Khac Chi, Hoang; Bordas, Stéphane

    2014-01-01

    Computational homogenisation is a widely spread technique to calculate the overall properties of a composite material from the knowledge of the constitutive laws of its microscopic constituents [1, 2]. Indeed, it relies on fewer assumptions than analytical or semi-analytical homogenisation approaches and can be used to coarse-grain a large range of micro-mechanical models. However, this accuracy comes at large computational costs, which prevents computational homogenisation from b...

  18. Green Computing – An Eco-friendly Approach for Energy Efficiency and Minimizing E-Waste

    Directory of Open Access Journals (Sweden)

    Vinoth Kumar T., Kiruthiga P.

    2014-05-01

    The need for environmentally friendly computing gadgets and energy saving devices, under the auspices of 'Green Computing', has become a global phenomenon, with the aim of reducing the environmental degradation that emanates from abuse and the rising threat of global warming. Green computing represents an environmentally responsible way to reduce power consumption and environmental e-waste. Green computing is the practice of using computing resources efficiently. The goals are to reduce the use of hazardous materials, maximize energy efficiency during the product's lifetime, and promote recyclability or biodegradability of defunct products and factory waste. It is known that as the economy expands, the demand for computing devices rises as businesses and individuals seek faster ways of doing things, 'the computing way'. Information technology devices are upgraded rapidly due to the need for speed, flexibility, simplicity and cost effectiveness, thus outdating the previous technology. Hence we need to implement energy-efficient central processing units (CPUs), servers and peripherals with reduced resource consumption, together with proper disposal of electronic waste (e-waste).

  1. Computer-aided cluster expansion: An efficient algebraic approach for open quantum many-particle systems

    Science.gov (United States)

    Foerster, A.; Leymann, H. A. M.; Wiersig, J.

    2017-03-01

    We introduce an equation of motion approach that allows for an approximate evaluation of the time evolution of a quantum system, where the algebraic work to derive the equations of motion is done by the computer. The introduced procedures offer a variety of different types of approximations applicable to finite systems with strong coupling as well as to arbitrarily large systems where augmented mean-field theories like the cluster expansion can be applied.

  2. Efficiently computing pathway free energies: New approaches based on chain-of-replica and Non-Boltzmann Bennett reweighting schemes.

    Science.gov (United States)

    Hudson, Phillip S; White, Justin K; Kearns, Fiona L; Hodoscek, Milan; Boresch, Stefan; Woodcock, H Lee

    2015-05-01

    Accurately modeling condensed phase processes is one of computation's most difficult challenges. Include the possibility that conformational dynamics may be coupled to chemical reactions, where multiscale (i.e., QM/MM) methods are needed, and this task becomes even more daunting. Tackling it requires free energy simulations (i.e., molecular dynamics), multiscale modeling, and reweighting schemes. Herein, we present two new approaches for mitigating the aforementioned challenges. The first is a new chain-of-replica method (off-path simulations, OPS) for computing potentials of mean force (PMFs) along an easily defined reaction coordinate. This development is coupled with a new distributed, highly parallel replica framework (REPDstr) within the CHARMM package. Validation of these new schemes is carried out on two processes that undergo conformational changes: the simple torsional rotation of butane, and a much more challenging glycosidic rotation (in vacuo and solvated). Additionally, a new approach that greatly improves (i.e., possibly by an order of magnitude) the efficiency of computing QM/MM PMFs is introduced and compared to standard schemes. Our efforts are grounded in the recently developed method for efficiently computing QM-based free energies (i.e., QM-Non-Boltzmann Bennett, QM-NBB). Again, we validate this new technique by computing the QM/MM PMF of butane's torsional rotation. The OPS-REPDstr method is a promising new approach that overcomes many limitations of standard pathway simulations in CHARMM. The combination of QM-NBB with pathway techniques is very promising, as it offers significant advantages over current procedures. Efficiently computing potentials of mean force remains a major, unresolved area of interest. This article is part of a Special Issue entitled Recent developments of molecular dynamics.

  3. Efficient Discovery of Novel Multicomponent Mixtures for Hydrogen Storage: A Combined Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States); Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry

    2016-11-28

    The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage that enable meeting the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H2 or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH2+NH3BH3] and nitrogen-hydrogen based borohydrides [e.g. Al(BH4)3(NH3)3]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics, using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility. …

  4. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    Science.gov (United States)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program, avoiding the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method; both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be …

  5. A generalized computationally efficient inverse characterization approach combining direct inversion solution initialization with gradient-based optimization

    Science.gov (United States)

    Wang, Mengyu; Brigham, John C.

    2017-03-01

    A computationally efficient gradient-based optimization approach for inverse material characterization from incomplete system response measurements that can utilize a generally applicable parameterization (e.g., finite element-type parameterization) is presented and evaluated. The key to this inverse characterization algorithm is the use of a direct inversion strategy with Gappy proper orthogonal decomposition (POD) response field estimation to initialize the inverse solution estimate prior to gradient-based optimization. Gappy POD is used to estimate the complete (i.e., all components over the entire spatial domain) system response field from incomplete (e.g., partial spatial distribution) measurements obtained from some type of system testing along with some amount of a priori information regarding the potential distribution of the unknown material property. The estimated complete system response is used within a physics-based direct inversion procedure with a finite element-type parameterization to estimate the spatial distribution of the desired unknown material property with minimal computational expense. Then, this estimated spatial distribution of the unknown material property is used to initialize a gradient-based optimization approach, which uses the adjoint method for computationally efficient gradient calculations, to produce the final estimate of the material property distribution. The three-step [(1) Gappy POD, (2) direct inversion, and (3) gradient-based optimization] inverse characterization approach is evaluated through simulated test problems based on the characterization of elastic modulus distributions with localized variations (e.g., inclusions) within simple structures. Overall, this inverse characterization approach is shown to efficiently and consistently provide accurate inverse characterization estimates for material property distributions from incomplete response field measurements. Moreover, the solution procedure is shown to be capable …
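
    Step (1), the Gappy POD estimate, is compact enough to sketch: given a POD basis and measurements at a masked subset of points, solve a small least-squares problem for the modal coefficients and reconstruct the full field. The data below are synthetic, not from any finite element model.

      import numpy as np

      rng = np.random.default_rng(2)
      snapshots = rng.normal(size=(200, 30))          # 200-point fields, 30 snapshots
      Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
      Phi = Phi[:, :5]                                # keep 5 POD modes

      truth = Phi @ rng.normal(size=5)                # a field lying in the mode span
      mask = rng.choice(200, size=40, replace=False)  # indices of measured points
      y = truth[mask]                                 # incomplete measurements

      a, *_ = np.linalg.lstsq(Phi[mask, :], y, rcond=None)  # fit modal coefficients
      full = Phi @ a                                  # reconstructed complete field
      print(np.linalg.norm(full - truth) / np.linalg.norm(truth))  # ~1e-15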

  6. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach

    Science.gov (United States)

    Karamintziou, Sofia D.; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G.; Tagaris, George A.; Sakas, Damianos E.; Polychronaki, Georgia E.; Tsirogiannis, George L.; David, Olivier; Nikita, Konstantina S.

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson’s disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications. PMID:28222198

  7. Computation of the Isotropic Hyperfine Coupling Constant: Efficiency and Insights from a New Approach Based on Wave Function Theory.

    Science.gov (United States)

    Giner, Emmanuel; Tenti, Lorenzo; Angeli, Celestino; Ferré, Nicolas

    2017-02-14

    The present paper reports an original computational strategy for the computation of isotropic hyperfine coupling constants (hcc). The algorithm proposed here is based on an approach recently introduced by some of the authors, namely, the first-order breathing orbital self-consistent field (FOBO-SCF). The approach is an almost parameter-free wave function method capable of accurately treating spin delocalization together with spin polarization effects while staying in a restricted formalism and avoiding spin contamination. The efficiency of the method is tested on a series of small radicals, among which are four nitroxide radicals, and the comparison with high-level ab initio methods shows very encouraging results. On the basis of these results, the method is then applied to compute the hcc of a challenging system, namely, the DEPMPO-OOH radical in various conformations. The reference values obtained on such a large system allow us to validate a cheap computational method based on density functional theory (DFT). Another interesting feature of the model applied here is that it allows for the rationalization of the results according to a relatively simple scheme based on a two-step mechanism. More precisely, the results are analyzed in terms of two separate contributions: first the spin delocalization and then the spin polarization.

  8. A Computationally-Efficient Kinetic Approach for Gas/Particle Mass Transfer Treatments: Development, Testing, and 3-D Application

    Science.gov (United States)

    Hu, X.; Zhang, Y.

    2007-05-01

    The Weather Research and Forecast/Chemistry Model (WRF/Chem), which simulates chemistry simultaneously with meteorology, has recently been developed for real-time forecasting by the U.S. National Center for Atmospheric Research (NCAR) and the National Oceanic & Atmospheric Administration (NOAA). As one of six air quality models, WRF/Chem with a modal aerosol module has been applied for ozone and PM2.5 ensemble forecasts over eastern North America as part of the 2004 New England Air Quality Study (NEAQS) program (NEAQS-2004). Significant differences exist in the partitioning of volatile species (e.g., ammonium and nitrate) simulated by the six models. Model biases are partially attributed to the equilibrium assumption used in the gas/particle mass transfer approach in some models. Development of a more accurate, yet computationally-efficient gas/particle mass transfer approach for three-dimensional (3-D) applications, in particular real-time forecasting, is therefore warranted. The Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) has been implemented into WRF/Chem (referred to as WRF/Chem-MADRID). WRF/Chem-MADRID offers three gas/particle partitioning treatments: equilibrium, kinetic, and hybrid approaches. The equilibrium approach is computationally-efficient and commonly used in 3-D air quality models, but less accurate under certain conditions (e.g., in the presence of coarse, reactive particles such as PM containing sea salts in coastal areas). The kinetic approach is accurate but computationally expensive, limiting its 3-D applications. The hybrid approach attempts to provide a compromise between the merits and drawbacks of the two approaches by treating fine PM (typically < 2.5 μm) with the equilibrium approach and coarse PM kinetically. A kinetic treatment for MADRID has recently been developed for 3-D applications based on an Analytical Predictor of Condensation (referred to as kinetic/APC). In this study, WRF/Chem-MADRID with the kinetic/APC approach will be further evaluated along with the equilibrium and hybrid approaches.

  9. An efficient computational approach to characterize DSC-MRI signals arising from three-dimensional heterogeneous tissue structures.

    Science.gov (United States)

    Semmineh, Natenael B; Xu, Junzhong; Boxerman, Jerrold L; Delaney, Gary W; Cleary, Paul W; Gore, John C; Quarles, C Chad

    2014-01-01

    The systematic investigation of susceptibility-induced contrast in MRI is important to better interpret the influence of microvascular and microcellular morphology on DSC-MRI derived perfusion data. Recently, a novel computational approach called the Finite Perturber Method (FPM), which enables the study of susceptibility-induced contrast in MRI arising from arbitrary 3D microvascular morphologies, has been developed. However, the FPM is less efficient in simulating water diffusion, especially for complex tissues. In this work, an improved computational approach that combines the FPM with a matrix-based finite difference method (FDM), which we call the Finite Perturber Finite Difference Method (FPFDM), has been developed in order to efficiently investigate the influence of vascular and extravascular morphological features on susceptibility-induced transverse relaxation. The current work provides a framework for better interpreting how DSC-MRI data depend on various phenomena, including contrast agent leakage in cancerous tissues and water diffusion rates. In addition, we use simulated and micro-CT extracted tissue structures to illustrate the improved FPFDM, along with its potential applications and limitations.

  10. Efficient Seeds Computation Revisited

    CERN Document Server

    Christou, Michalis; Iliopoulos, Costas S; Kubica, Marcin; Pissis, Solon P; Radoszewski, Jakub; Rytter, Wojciech; Szreder, Bartosz; Walen, Tomasz

    2011-01-01

    The notion of the cover is a generalization of a period of a string, and there are linear time algorithms for finding the shortest cover. The seed is a more complicated generalization of periodicity: it is a cover of a superstring of a given string, and the shortest seed problem is of much higher algorithmic difficulty. The problem is not well understood; no linear time algorithm is known. In the paper we give linear time algorithms for some of its versions: computing the shortest left-seed array, the longest left-seed array and checking for seeds of a given length. The algorithm for the last problem is used to compute the seed array of a string (i.e., the shortest seeds for all the prefixes of the string) in $O(n^2)$ time. We describe also a simpler alternative algorithm computing efficiently the shortest seeds. As a by-product we obtain an $O(n\log{(n/m)})$ time algorithm checking if the shortest seed has length at least $m$ and finding the corresponding seed. We also correct some important details missing in th…
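
    To fix the definitions (this is not the paper's linear-time machinery): a prefix of w is a cover if its occurrences, overlaps allowed, tile the whole string. A brute-force quadratic check suffices to illustrate:

      def is_cover(w, c):
          m, last = len(c), -1
          for i in range(len(w) - m + 1):
              if w[i:i + m] == c:
                  if last != -1 and i - last > m:  # uncovered gap between hits
                      return False
                  last = i
          return last == len(w) - m                # occurrences reach the end

      def shortest_cover(w):
          for m in range(1, len(w) + 1):           # try prefixes by length
              if is_cover(w, w[:m]):
                  return w[:m]

      print(shortest_cover("abaababaaba"))         # -> "aba"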

  11. An efficient approach of EEG feature extraction and classification for brain computer interface

    Institute of Scientific and Technical Information of China (English)

    Wu Ting; Yan Guozheng; Yang Banghua

    2009-01-01

    In the study of brain-computer interfaces, a method of feature extraction and classification for two kinds of imagination tasks is proposed. It takes the Euclidean distance between the mean traces recorded from the channels under the two kinds of imagination as a feature, and determines the imagination class using a threshold value. The experimental background and theoretical foundation were analyzed with reference to the BCI 2003 data sets, and the classification precision was compared with the best result of the competition. The results show that the method has high precision and is well suited for application in practical systems.
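
    The classifier described is essentially two lines of arithmetic: average the training trials per class, then compare a new trial's Euclidean distances to the two class means against a threshold. A sketch with synthetic trials:

      import numpy as np

      rng = np.random.default_rng(3)
      left  = rng.normal(0.0, 1.0, (40, 128))   # 40 trials x 128 samples, class A
      right = rng.normal(0.8, 1.0, (40, 128))   # class B

      mean_l, mean_r = left.mean(axis=0), right.mean(axis=0)

      def classify(trial, threshold=0.0):
          d = np.linalg.norm(trial - mean_l) - np.linalg.norm(trial - mean_r)
          return "A" if d + threshold < 0 else "B"

      test = rng.normal(0.8, 1.0, 128)          # drawn from class B
      print(classify(test))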

  12. A Computationally Efficient Approach for Calculating Galaxy Two-Point Correlations

    CERN Document Server

    Demina, Regina; BenZvi, Segev; Hindrichs, Otto

    2016-01-01

    We develop a modification to the calculation of the two-point correlation function commonly used in the analysis of large scale structure in cosmology. An estimator of the two-point correlation function is constructed by contrasting the observed distribution of galaxies with that of a uniformly populated random catalog. Using the assumption that the distribution of random galaxies in redshift is independent of angular position allows us to replace pairwise combinatorics with fast integration over probability maps. The new method significantly reduces the computation time while simultaneously increasing the precision of the calculation.
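
    A 1-D toy version of the idea: histogram the data-data pair separations, but obtain the random-random term analytically from the uniform distribution rather than by pair counting, loosely echoing the paper's replacement of combinatorics with integration over probability maps.

      import numpy as np

      rng = np.random.default_rng(4)
      L, n = 1000.0, 2000
      data = rng.uniform(0, L, n)                    # unclustered toy "galaxies"

      edges = np.linspace(0, 50, 11)
      d = np.abs(data[:, None] - data[None, :])      # all pairwise separations
      dd, _ = np.histogram(d[np.triu_indices(n, 1)], bins=edges)

      # Expected pair counts for a uniform field on [0, L]: the separation pdf
      # is 2*(L - r)/L**2, integrated over each bin and scaled by n*(n-1)/2.
      r0, r1 = edges[:-1], edges[1:]
      rr = n * (n - 1) / 2 * ((r1 - r0) * 2 / L - (r1**2 - r0**2) / L**2)

      xi = dd / rr - 1.0                             # ~0 for an unclustered field
      print(np.round(xi, 3))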

  13. Efficient computation of the spontaneous decay rate of arbitrarily shaped 3D nanosized resonators: a Krylov model-order reduction approach

    NARCIS (Netherlands)

    Zimmerling, J.T.; Wei, L.; Urbach, H.P.; Remis, R.F.

    2016-01-01

    We present a Krylov model-order reduction approach to efficiently compute the spontaneous decay (SD) rate of arbitrarily shaped 3D nanosized resonators. We exploit the symmetry of Maxwell’s equations to efficiently construct so-called reduced-order models that approximate the SD rate of a quantum …

  14. COMPUTATION OF THE FULL ENERGY PEAK EFFICIENCY OF AN HPGE DETECTOR USING A NEW COMPACT SIMULATION ANALYTICAL APPROACH FOR SPHERICAL SOURCES

    Directory of Open Access Journals (Sweden)

    AHMED M. EL-KHATIB

    2013-10-01

    The full energy peak efficiency of an HPGe detector is computed using a new analytical approach. The approach accounts for the effects of self-attenuation by the source matrix and of attenuation by the source container and the detector housing materials on the detector efficiency. The experimental calibration process was carried out using radioactive spherical sources containing the aqueous 152Eu radionuclide, which produces photons with a wide range of energies, from 121 keV up to 1408 keV. The comparison shows good agreement between the measured and calculated efficiencies of the detector for spherical sources.

  15. Efficient computation of argumentation semantics

    CERN Document Server

    Liao, Beishui

    2013-01-01

    Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly in autonomous systems, as well as in more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligent …

  16. Growing Cloud Computing Efficiency

    Directory of Open Access Journals (Sweden)

    Dr. Mohamed F. AlAjmi, Dr. Arun Sharma, Shakir Khan

    2012-05-01

    Cloud computing is fundamentally altering expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the response time of the services they consume. Service developers want service providers to ensure, or to give them the ability to perform, dynamic allocation and management of resources in response to changing demand patterns in real time. Ultimately, service providers are under pressure to build their infrastructure to facilitate real-time end-to-end visibility and dynamic resource management with fine-grained control, in order to decrease the total cost of ownership while improving agility. What is required is a rethink of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data centre from the traditional server-centric architecture model to a cloud or network-centric model. This paper presents and describes a reference model for a network-centric data centre infrastructure management stack that uses and validates key ideas that have enabled dynamism, scalability, reliability and security in the telecommunications industry, applied here to computing engineering. Finally, the paper explains a proof-of-concept system that was implemented to show how dynamic resource management can be enforced to enable real-time service guarantees for a network-centric data centre architecture.

  17. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, the sewer system or the waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real-time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long-term simulation followed by statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or for such integrated modelling, mainly because of their long simulation times and the inability to couple submodels made in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that requires a minimal calculation time; (2) allow a fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) have a calculation scheme based on open source code to allow for future extensions or the coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each …
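
    The storage cell concept mentioned at the end reduces, in its simplest form, to a linear reservoir stepped explicitly; chains of such cells emulate routing through a sewer or river reach. The parameters below are invented for illustration, not calibrated to any InfoWorks model.

      import numpy as np

      def route(inflow, k=5.0, dt=1.0, s0=0.0):
          s, out = s0, []
          for q_in in inflow:
              q_out = s / k                 # linear storage-outflow relation
              s += dt * (q_in - q_out)      # water balance update
              out.append(q_out)
          return np.array(out)

      hydrograph = np.concatenate([np.zeros(5), np.full(10, 4.0), np.zeros(25)])
      print(np.round(route(hydrograph), 2))  # attenuated, delayed outflow peak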

  18. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process. Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the …

  19. A primer on the energy efficiency of computing

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  20. Resource-efficient linear optical quantum computation.

    Science.gov (United States)

    Browne, Daniel E; Rudolph, Terry

    2005-07-01

    We introduce a scheme for linear optics quantum computation that makes no use of teleported gates and requires stable interferometry over only the coherence length of the photons. We achieve a much greater degree of efficiency and a simpler implementation than previous proposals. We follow the "cluster state" measurement-based quantum computational approach, and show how cluster states may be efficiently generated from pairs of maximally polarization-entangled photons using linear optical elements. We demonstrate the universality and usefulness of generic parity measurements, as well as introducing the use of redundant encoding of qubits to enable the utilization of destructive measurements, both features of use in a more general context.

  1. MiR-RACE, a new efficient approach to determine the precise sequences of computationally identified trifoliate orange (Poncirus trifoliata) microRNAs.

    Directory of Open Access Journals (Sweden)

    Changnian Song

    BACKGROUND: Among the hundreds of reported plant genes encoding miRNAs, many more have been predicted by numerous computational methods. However, unlike protein-coding genes defined by start and stop codons, the ends of miRNA molecules do not have characteristics that can be used to define the mature miRNAs exactly, so computational miRNA prediction methods often cannot locate the mature miRNA in a precursor with nucleotide-level precision. To our knowledge, there have been no reports of comprehensive strategies for determining the precise sequences, especially the two termini, of these miRNAs. METHODS: In this study, we report an efficient method to determine the precise sequences of computationally predicted microRNAs (miRNAs) that combines miRNA-enriched library preparation, two specific 5' and 3' miRNA RACE (miR-RACE) PCR reactions, and sequence-directed cloning, in which the most challenging step is the design of the two gene-specific primers for the two RACE reactions. miRNA-mediated mRNA cleavage detected by RLM-5' RACE and sequencing were carried out to validate the miRNAs. Real-time PCR was used to analyze the expression of each miRNA. RESULTS: The efficiency of this newly developed method was validated using nine trifoliate orange (Poncirus trifoliata) miRNAs predicted computationally. The computationally identified miRNAs were validated by miR-RACE and sequencing. Quantitative analysis showed that they have variable expression. Eight target genes were experimentally verified by detection of the miRNA-mediated mRNA cleavage in Poncirus trifoliata. CONCLUSION: The efficient and powerful approach developed herein can be successfully used to validate the sequences of miRNAs, especially the termini, which together depict the complete miRNA sequence in the computationally predicted precursor.

  2. STRATEGY FOR IMPROVEMENT OF SAFETY AND EFFICIENCY OF COMPUTER-AIDED DESIGN ANALYSIS OF CIVIL ENGINEERING STRUCTURES ON THE BASIS OF THE SYSTEM APPROACH

    Directory of Open Access Journals (Sweden)

    Zaikin Vladimir Genrikhovich

    2012-12-01

    The authors highlight three problems of the age of information technologies and propose a strategy for their resolution in relation to the computer-aided design of civil engineering structures. The authors express their concerns about the globalization of software packages intended for the analysis of civil engineering structures and developed outside of Russia. The problem of poor-quality input data has reached Russia. Lately, the rate of accidents involving buildings and structures has been growing, and not only in Russia. Control over the efficiency of design projects is rarely performed; this attitude should be changed. The development and introduction of CAD, along with the application of efficient methods for predicting the behaviour of building structures, are in demand. Computer-aided calculations have the function of a logical nucleus, and they need proper control. A system approach to computer-aided calculations and to technologies for predicting structural failures is formulated by the authors. Two tasks of the system approach and the fundamentals of the strategy for its implementation are formulated. Cases of negative results of computer-aided design of engineering structures were studied and multi-component design patterns were developed. Conclusions concerning the results of research aimed at regular and wide-scale implementation of the strategy fundamentals are formulated. The organizational and innovative actions proposed in the strategy concerning the predicted behaviour of civil engineering structures are intended to facilitate: improvement of the safety and reliability of buildings and structures; saving of building materials and resources; improvement of the labour efficiency of designers; modernization and improvement of the accuracy of predicted behaviour of buildings and of building standards; closer ties between civil engineering researchers and construction companies; and development of a competitive environment to boost …

  3. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The high-dimension, low-sample-size nature of gene expression data makes the classification task challenging, so feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for a classifier not only decreases computational time and cost but also improves classification performance. However, among the different approaches to feature selection, most suffer from several problems such as lack of robustness and validation issues. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods: We used a leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated with four classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results: Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we ended up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Applying the same method with a more stringent higher positive and lower negative threshold condition reduced the number to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions: The feature selection method gave good results with minimal classification error. Our heat-map result shows a distinct pattern of refined genes discriminating between the two classes of leukemia.
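
    The cluster-scoring step described above is simple enough to sketch. A hedged Python illustration: the thresholds and the percentage criterion are placeholders, since the abstract does not give the exact values used.

```python
import numpy as np

def select_gene_clusters(expr, cluster_ids, pos_thr=2.0, neg_thr=-2.0, pct=0.6):
    """Score gene clusters by summed expression, as sketched in the abstract.

    expr        : (genes, samples) expression matrix.
    cluster_ids : (genes,) cluster label per gene.
    A cluster is retained if at least `pct` of its per-sample summed
    expression values exceed `pos_thr` or fall below `neg_thr`
    (placeholder criterion); otherwise it is eliminated.
    Returns the indices of genes in the retained clusters.
    """
    keep = []
    for c in np.unique(cluster_ids):
        members = np.where(cluster_ids == c)[0]
        cluster_sum = expr[members].sum(axis=0)   # summed expression per sample
        frac = np.mean((cluster_sum > pos_thr) | (cluster_sum < neg_thr))
        if frac >= pct:                           # cluster considered important
            keep.extend(members)
    return np.array(keep)
```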

  4. Efficient computation of optimal actions.

    Science.gov (United States)

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
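
    The "linear" structure referred to here can be made concrete. In Todorov's discrete first-exit formulation, the exponentiated value function (the desirability) satisfies a linear fixed-point equation that is solved by iterating a matrix-vector product. A minimal numpy sketch with illustrative names, assuming absorbing goal states with zero cost.

```python
import numpy as np

def solve_lmdp(P, q, n_iter=1000, tol=1e-12):
    """Linearly-solvable MDP: z = diag(exp(-q)) P z (Todorov's framework).

    P : (n, n) passive transition probabilities, rows summing to 1;
        goal states are absorbing (P[g, g] = 1) with cost q[g] = 0.
    q : (n,) state costs.
    Returns the desirability z = exp(-v) and the optimal transitions.
    """
    G = np.exp(-q)[:, None] * P              # diag(exp(-q)) @ P
    z = np.ones(len(q))
    for _ in range(n_iter):
        z_new = G @ z                        # linear Bellman backup
        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new
    U = P * z[None, :]                       # u*(x'|x) proportional to p(x'|x) z(x')
    return z, U / U.sum(axis=1, keepdims=True)
```

    Because the backup is a plain matrix-vector product, the exhaustive minimization over actions performed by Dynamic Programming is avoided, which is the efficiency gain the abstract describes.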

  5. An efficient approach to computing third-order scattering of sound by sound with application to parametric arrays.

    Science.gov (United States)

    Johnson, Spencer J; Steer, Michael B

    2014-10-01

    A mathematical description of third-order scattered sound fields is derived using a multi-Gaussian beam (MGB) model that describes the sound field of any arbitrary axially symmetric beam as a series of Gaussian base functions. The third-order intermodulation (IM3) frequency components are produced by considering the cascaded nonlinear second-order effects when analyzing the interaction between the first- and second-order frequency components during the nonlinear scattering of sound by sound from two noncollinear ultrasonic baffled piston sources. The theory is extended to the modeling of the sound beams generated by parametric transducer arrays, showing that the MGB model can be efficiently used to calculate both the second- and third-order sound fields of the array. Measurements are presented for the IM3 frequency components and parametric array sound fields, and comparisons of the model are made with traditional simulation results from direct numerical integration.

  6. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  7. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  9. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  10. PRCA: A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications reach only 8%~15% utilization on modern computer systems, and there are many obstacles to improving system efficiency. The root cause is the conflict between fixed general-purpose computer architectures and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed, showing that PRCA is feasible and highly efficient.

  11. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  12. Quantum computing: Efficient fault tolerance

    Science.gov (United States)

    Gottesman, Daniel

    2016-12-01

    Dealing with errors in a quantum computer typically requires complex programming and many additional quantum bits. A technique for controlling errors has been proposed that alleviates both of these problems.

  13. Efficient Computational Model of Hysteresis

    Science.gov (United States)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
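
    The element-wise structure just described can be evaluated directly. A Python sketch of a parallel bank of backlash (play) operators under the abstract's quasistatic assumption; the element with zero deadband supplies the linear component, and the gains and deadband widths would be fit to measured displacement-versus-voltage data.

```python
import numpy as np

def hysteresis_output(voltage, deadbands, gains):
    """Quasistatic hysteresis as a weighted sum of backlash elements.

    voltage   : 1-D array of input samples (applied voltage over time).
    deadbands : half-widths r_i of each backlash element; r_0 = 0 gives
                the linear component described in the abstract.
    gains     : output gain of each element.
    """
    states = np.zeros(len(deadbands))            # internal state per element
    out = np.empty(len(voltage))
    for t, v in enumerate(voltage):
        # Backlash (play) operator: the state is dragged along only when
        # the input leaves the deadband around the previous state.
        states = np.clip(states, v - deadbands, v + deadbands)
        out[t] = np.dot(gains, states)
    return out
```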

  14. Efficient computation of spaced seeds

    Directory of Open Access Journals (Sweden)

    Ilie Silvana

    2012-02-01

    Full Text Available Abstract Background The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used. The leading software in the common Bernoulli model is the SpEED program. Findings SpEED uses a hill climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs. We compute as well multiple seeds of the same weight as MegaBLAST, that greatly improve its sensitivity. Conclusion Multiple spaced seeds are being successfully used in bioinformatics software programs. Enabling researchers to compute very fast high quality seeds will help expanding the range of their applications.
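
    The sensitivity advantage of spaced seeds comes from the '0' (don't-care) positions, which tolerate mismatches inside a hit. A toy Python sketch of spaced-seed hit detection with an illustrative pattern; SpEED's optimized seed-design internals are not reproduced here.

```python
def seed_hits(s1, s2, seed="1101111"):
    """Find spaced-seed hits between two sequences.

    A hit at offset (i, j) requires s1 and s2 to agree at every '1'
    position of the seed; '0' positions are wildcards, which is what
    makes spaced seeds more sensitive than a consecutive seed of the
    same weight (number of '1's).
    """
    ones = [k for k, c in enumerate(seed) if c == "1"]
    span = len(seed)
    return [(i, j)
            for i in range(len(s1) - span + 1)
            for j in range(len(s2) - span + 1)
            if all(s1[i + k] == s2[j + k] for k in ones)]

# A mismatch under the '0' position still yields a hit:
print(seed_hits("ACGTACG", "ACCTACG"))  # [(0, 0)]
```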

  15. Efficient computational noise in GLSL

    CERN Document Server

    McEwan, Ian; Gustavson, Stefan; Richardson, Mark

    2012-01-01

    We present GLSL implementations of Perlin noise and Perlin simplex noise that run fast enough for practical consideration on current generation GPU hardware. The key benefits are that the functions are purely computational, i.e. they use neither textures nor lookup tables, and that they are implemented in GLSL version 1.20, which means they are compatible with all current GLSL-capable platforms, including OpenGL ES 2.0 and WebGL 1.0. Their performance is on par with previously presented GPU implementations of noise, they are very convenient to use, and they scale well with increasing parallelism in present and upcoming GPU architectures.

  16. Energy-efficient quantum computing

    Science.gov (United States)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  17. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  18. Computationally efficient algorithm for fast transients detection

    CERN Document Server

    Soudlenkov, Gene

    2011-01-01

    A computationally inexpensive algorithm for the detection of dispersed transients has been developed using a Cumulative Sums (CUSUM) scheme for detecting abrupt changes in the statistical characteristics of the signal. The efficiency of the algorithm is demonstrated on pulsar PSR J0835-4510.
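
    The CUSUM scheme itself is compact. A textbook one-sided version in Python, assuming the quiescent mean and standard deviation of the signal are known; this is illustrative rather than the authors' exact pulsar pipeline.

```python
import numpy as np

def cusum_detect(x, mean, std, k=0.5, h=5.0):
    """One-sided CUSUM test for an upward shift in the signal mean.

    x : 1-D array of samples (e.g. a dedispersed time series).
    k : reference value (slack), in units of the standard deviation.
    h : decision threshold, in units of the standard deviation.
    Returns the first index at which the statistic exceeds h, or -1.
    """
    z = (np.asarray(x, dtype=float) - mean) / std   # standardized samples
    s = 0.0
    for i, zi in enumerate(z):
        s = max(0.0, s + zi - k)   # accumulate evidence of an abrupt change
        if s > h:
            return i               # transient detected
    return -1
```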

  19. Efficient Multi-Party Computation over Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Fehr, Serge; Ishai, Yuval

    2003-01-01

    by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best...... known constant-round protocols do not efficiently scale even to the case of large finite fields. Our contribution goes in these two directions. First, we propose a basis for unconditionally secure MPC over an arbitrary finite ring, an algebraic object with a much less nice structure than a field...... the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.

  1. Computational Approaches to Interface Design

    Science.gov (United States)

    Corker; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    Tools which make use of computational processes - mathematical, algorithmic and/or knowledge-based - to perform portions of the design, evaluation and/or construction of interfaces have become increasingly available and powerful. Nevertheless, there is little agreement as to the appropriate role for a computational tool to play in the interface design process. Current tools fall into broad classes depending on which portions, and how much, of the design process they automate. The purpose of this panel is to review and generalize about computational approaches developed to date, discuss the tasks for which they are suited, and suggest methods to enhance their utility and acceptance. Panel participants represent a wide diversity of application domains and methodologies. This should provide for lively discussion about implementation approaches, accuracy of design decisions, acceptability of representational tradeoffs and the optimal role for a computational tool to play in the interface design process.

  2. Efficient Architectural Framework for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Souvik Pal

    2012-06-01

    Full Text Available Cloud computing is a model that enables adaptive, convenient and on-demand network access to a collective pool of adjustable and configurable physical computing resources, such as networks, servers, bandwidth and storage, that can be swiftly provisioned and released with negligible supervision effort or service-provider interaction. From a business perspective, the viable achievements of Cloud Computing and recent developments in Grid computing have brought the platform that has introduced virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to disguise complexity from end users. Cloud service providers (CSPs use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, which are enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.

  3. Efficient integration method for fictitious domain approaches

    Science.gov (United States)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
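
    The dimension-reduction idea can be illustrated on the simplest possible case: a domain integral (an area) evaluated purely from boundary data via Green's theorem, the two-dimensional face of the divergence theorem. A small Python sketch of the principle, not of the FCM implementation itself.

```python
import numpy as np

def polygon_area(vertices):
    """Area from the boundary alone: the 2-D integral  A = ∫∫ 1 dA  is
    reduced to the contour integral  A = 1/2 ∮ (x dy - y dx),  mirroring
    the dimension reduction used for the system matrices.

    vertices : (n, 2) array of boundary points in traversal order.
    """
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# The unit square's area is recovered from four boundary points alone:
print(polygon_area(np.array([[0, 0], [1, 0], [1, 1], [0, 1]])))  # 1.0
```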

  4. An efficient iris segmentation approach

    Science.gov (United States)

    Gomai, Abdu; El-Zaart, A.; Mathkour, H.

    2011-10-01

    Iris recognition systems have become reliable for authentication and verification tasks. Such a system consists of five stages: image acquisition, iris segmentation, iris normalization, feature encoding, and feature matching. The iris segmentation stage is one of the most important, since it plays an essential role in locating the iris efficiently and accurately. In this paper, we present a new approach for iris segmentation using image processing techniques. The approach is composed of four main parts. (1) Eliminating reflections of light on the eye image, based on inverting the colors of the grayscale image, filling holes in the intensity image, and inverting the colors of the intensity image again to obtain the original grayscale image without any reflections. (2) Pupil boundary detection, based on dividing the eye image into nine sub-images and finding the minimum of the mean intensities of the sub-images to obtain a suitable pupil threshold. (3) Enhancing the contrast of the outer iris boundary using an exponential operator to obtain sharp variation. (4) Outer iris boundary localization, based on applying a gray threshold and morphological operations to the rectangular part of the eye image that includes the pupil and the outer iris boundaries, to find the small radius of the outer iris boundary from the center of the pupil. The proposed approach has been tested on the CASIA v1.0 iris image database and another collected iris image database. The experimental results show that the approach detects the pupil and outer iris boundary with high accuracy (approximately 100%) and reduced time consumption.
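
    Step (2) is easy to sketch. A hedged numpy illustration of the pupil-threshold estimate: the 3x3 grid of sub-images follows the abstract, while the function name and the final mask are illustrative.

```python
import numpy as np

def pupil_threshold(gray, grid=3):
    """Estimate a pupil threshold from the darkest of grid x grid sub-images.

    The eye image is divided into nine sub-images (grid=3) and the minimum
    of their mean intensities is taken as the threshold, the pupil being
    the darkest region of the image.
    """
    h, w = gray.shape
    means = [gray[i * h // grid:(i + 1) * h // grid,
                  j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return min(means)

# rough binary pupil mask (illustrative):
# mask = gray <= pupil_threshold(gray)
```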

  5. Efficient GPU-based skyline computation

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Assent, Ira; Magnani, Matteo

    2013-01-01

    The skyline operator for multi-criteria search returns the most interesting points of a data set with respect to any monotone preference function. Existing work has almost exclusively focused on efficiently computing skylines on one or more CPUs, ignoring the high parallelism possible in GPUs. In...

  6. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics.

    Science.gov (United States)

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A

    2012-12-11

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as "multistate". These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations.

  7. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    Science.gov (United States)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.

  8. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  9. A Computational View of Market Efficiency

    CERN Document Server

    Hasanhodzic, Jasmina; Viola, Emanuele

    2009-01-01

    We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be efficient with respect to resources S (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t-m, ..., t-1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory m' > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.

  10. COMPUTATIONALLY EFFICIENT PRIVATE INFORMATION RETRIEVAL PROTOCOL

    Directory of Open Access Journals (Sweden)

    A. V. Afanasyeva

    2016-03-01

    Full Text Available This paper describes a new computationally efficient private information retrieval protocol for retrieving one q-ary symbol. The main advantage of the proposed solution lies in the low computational complexity of the information extraction procedure, as well as in its constructive simplicity and flexibility in choosing the system parameters. These results are based on coset properties. The proposed protocol has slightly worse communication complexity than the best current schemes, which are based on locally decodable codes, but it can easily be built for any parameters of the system, as opposed to codes. In comparison with similar solutions based on polynomials, the proposed method gains in computational complexity, which is especially important for servers that must service multiple requests from multiple users.

  11. Changing computing paradigms towards power efficiency.

    Science.gov (United States)

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications.
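
    The linear-solver kernel mentioned here is the classic setting for mixed low/high-precision computing. A minimal numpy sketch of iterative refinement, assuming a reasonably well-conditioned system; it is not the authors' implementation, and a production version would factor the matrix once in low precision and reuse the factorization.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b cheaply in float32, then refine in float64.

    Only the residual and the solution update are kept in high precision,
    recovering near double-precision accuracy while the bulk of the
    arithmetic (the solves) runs in energy-friendly low precision.
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x += d.astype(np.float64)
    return x
```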

  12. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Science.gov (United States)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of music notes played on commonly used instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.
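
    The harmonic-grouping step can be sketched independently of the RTFI front end. An illustrative Python version that assumes an ordinary magnitude spectrum in place of the paper's RTFI energy spectrum; the names and the candidate grid are placeholders.

```python
import numpy as np

def pitch_energy_spectrum(mag, freqs, candidates, n_harm=8):
    """Harmonic-grouping pitch salience for one analysis frame.

    mag        : magnitude (or energy) spectrum of the frame.
    freqs      : frequency of each spectral bin (Hz).
    candidates : candidate fundamental frequencies (Hz).
    For each candidate f0, the energies at its first n_harm harmonics are
    summed; preliminary pitch estimates are then peaks of this curve.
    """
    salience = np.zeros(len(candidates))
    for i, f0 in enumerate(candidates):
        for h in range(1, n_harm + 1):
            salience[i] += mag[np.argmin(np.abs(freqs - h * f0))]
    return salience
```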

  13. Computational Approaches to Vestibular Research

    Science.gov (United States)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of the life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method permits the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. The same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and assist in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  14. Efficient Resource Management in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rushikesh Shingade

    2015-12-01

    Full Text Available Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the large number of resources involved, evaluating the performance of Cloud resource-management policies efficiently is difficult. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud and CloudAuction. In the proposed Efficient Resource Management in Cloud Computing (EFRE model, CloudSim is used as the simulation toolkit for simulating a DataCenter in a Cloud computing system. The CloudSim toolkit also supports the creation of multiple virtual machines (VMs on a node of a DataCenter, where cloudlets (user requests are assigned to virtual machines by scheduling policies. This paper uses the Time-Shared and Space-Shared allocation policies for scheduling the cloudlets and compares them on metrics such as total execution time, number of resources and the resource-allocation algorithm. CloudSim has been used for the simulations, and the simulation results demonstrate that the resource management is effective.

  15. My Approaches to Promote Teaching Efficiency

    Institute of Scientific and Technical Information of China (English)

    李娟维

    2014-01-01

    Promoting teaching efficiency is a matter of the utmost concern for all teachers. This paper focuses on the different approaches the author adopts in her teaching, including creating learning situations and using humor in teaching.

  16. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  17. A computationally efficient fuzzy control scheme

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2013-12-01

    Full Text Available This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second-order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs and fuzzy systems. The controller for each degree of freedom (DOF consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, where not only the parameters but also the structure of the fuzzy system is optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed-loop system stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system, and the computation is time-saving because of the simple structure of the fuzzy systems; and (2) the controller is robust against various parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.

  18. Computational approaches for drug discovery.

    Science.gov (United States)

    Hung, Che-Lun; Chen, Chi-Chun

    2014-09-01

    Cellular proteins are the mediators of multiple organism functions, being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy and selectivity and to reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective, using experiments to compare predicted and actual drug activity, the results of which can be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligands and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationships and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be used. After lead compounds are generated, the rule of five can be used to assess whether they have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and the goodness-of-hit test, can be used to estimate the metrics of different drug design strategies. To further improve CADD performance, multi-computers and graphics processing units may be applied to reduce costs.

  20. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  1. Computational approach to Riemann surfaces

    CERN Document Server

    Klein, Christian

    2011-01-01

    This volume offers a well-structured overview of existent computational approaches to Riemann surfaces and those currently in development. The authors of the contributions represent the groups providing publicly available numerical codes in this field. Thus this volume illustrates which software tools are available and how they can be used in practice. In addition examples for solutions to partial differential equations and in surface theory are presented. The intended audience of this book is twofold. It can be used as a textbook for a graduate course in numerics of Riemann surfaces, in which case the standard undergraduate background, i.e., calculus and linear algebra, is required. In particular, no knowledge of the theory of Riemann surfaces is expected; the necessary background in this theory is contained in the Introduction chapter. At the same time, this book is also intended for specialists in geometry and mathematical physics applying the theory of Riemann surfaces in their research. It is the first...

  2. 'Lean' approach gives greater efficiency.

    Science.gov (United States)

    Call, Roger

    2014-02-01

    Adapting the 'Lean' methodologies used for many years by many manufacturers on the production line - such as in the automotive industry - and deploying them in healthcare 'spaces' can, Roger Call, an architect at Herman Miller Healthcare in the US, argues, 'easily remedy many of the inefficiencies' found within a healthcare facility. In an article that first appeared in the September 2013 issue of The Australian Hospital Engineer, he explains how 'Lean' approaches such as the 'Toyota production system', and 'Six Sigma', can be harnessed to good effect in the healthcare sphere.

  3. A Systems Approach to High Performance Buildings: A Computational Systems Engineering R&D Program to Increase DoD Energy Efficiency

    Science.gov (United States)

    2012-02-01

    3.1.1 Objectives and Background. a) Background: Building Energy Efficiency Retrofit Process. The key steps (see Figure 3.1.1) in the... current building energy efficiency retrofit include 1) a Facility Audit to collect building information such as: building type (climate, usage... building. To further benefit the performance of the building, tools were developed for tractable design optimization which trades off building energy efficiency and

  4. Efficient quantum circuits for one-way quantum computing.

    Science.gov (United States)

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
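
    For reference, the target of these primitives is the standard cluster state, conventionally generated by controlled-phase (CZ) gates between neighbouring qubits prepared in |+>. A small numpy sketch of that textbook construction; the paper's contribution, generating the same states directly from iSWAP or sqrt(SWAP), is not reproduced here.

```python
import numpy as np

def linear_cluster_state(n):
    """n-qubit linear cluster state: |+>^n followed by CZ on neighbours."""
    plus = np.ones(2) / np.sqrt(2)
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)              # |+> on every qubit
    for q in range(n - 1):                        # CZ on qubits (q, q+1)
        for idx in range(2 ** n):
            if (idx >> (n - 1 - q)) & 1 and (idx >> (n - 2 - q)) & 1:
                state[idx] *= -1.0                # phase flip on |...11...>
    return state

print(linear_cluster_state(2))  # (|00> + |01> + |10> - |11>)/2
```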

  5. Energy Efficiency in Computing (1/2)

    CERN Document Server

    CERN. Geneva

    2016-01-01

    As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic L...

  6. Improving the efficiency of abdominal aortic aneurysm wall stress computations.

    Science.gov (United States)

    Zelaya, Jaime E; Goenezen, Sevan; Dargon, Phong T; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses.

  7. Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation

    Science.gov (United States)

    2015-07-01

    [Contents fragments:] Efficient Mobile Oblivious Computation (EMOC); Memory... Assumptions and Procedures. Efficient Mobile Oblivious Computation (EMOC): Mobile applications increasingly require users to surrender private... In this effort, we developed Efficient Mobile Oblivious Computation (EMOC), a set of SFE protocols customized for the mobile platform. Using

  8. An efficient approach for reliability-based topology optimization

    Science.gov (United States)

    Kanakasabai, Pugazhendhi; Dhingra, Anoop K.

    2016-01-01

    This article presents an efficient approach for reliability-based topology optimization (RBTO) in which the computational effort involved in solving the RBTO problem is equivalent to that of solving a deterministic topology optimization (DTO) problem. The methodology presented is built upon the bidirectional evolutionary structural optimization (BESO) method used for solving the deterministic optimization problem. The proposed method is suitable for linear elastic problems with independent and normally distributed loads, subjected to deflection and reliability constraints. The linear relationship between the deflection and stiffness matrices along with the principle of superposition are exploited to handle reliability constraints to develop an efficient algorithm for solving RBTO problems. Four example problems with various random variables and single or multiple applied loads are presented to demonstrate the applicability of the proposed approach in solving RBTO problems. The major contribution of this article comes from the improved efficiency of the proposed algorithm when measured in terms of the computational effort involved in the finite element analysis runs required to compute the optimum solution. For the examples presented with a single applied load, it is shown that the CPU time required in computing the optimum solution for the RBTO problem is 15-30% less than the time required to solve the DTO problems. The improved computational efficiency allows for incorporation of reliability considerations in topology optimization without an increase in the computational time needed to solve the DTO problem.
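
    The ingredient that makes this possible is that deflection is linear in the applied loads, so for independent normal loads the deflection distribution, and hence the reliability, follows in closed form from one unit-load finite element solve per load. A scalar numpy sketch of that bookkeeping, with illustrative names; the BESO machinery itself is not shown.

```python
import numpy as np
from math import erf, sqrt

def deflection_reliability(d_unit, load_mean, load_std, u_limit):
    """Reliability of a deflection constraint via superposition.

    d_unit : deflection at the control point per unit load, one entry per
             independent applied load (a single FE solve for each).
    Deflection is linear in the loads, so for independent normal loads
        u ~ N( sum_i mu_i d_i,  sqrt(sum_i (sigma_i d_i)^2) )
    and no additional FE runs are needed. Returns P(u <= u_limit).
    """
    d, mu, sd = (np.asarray(a, dtype=float) for a in (d_unit, load_mean, load_std))
    m = np.dot(mu, d)
    s = np.sqrt(np.sum((sd * d) ** 2))
    beta = (u_limit - m) / s                       # reliability index
    return 0.5 * (1.0 + erf(beta / sqrt(2.0)))     # standard normal CDF
```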

  9. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  10. Energy Efficiency in Computing (2/2)

    CERN Document Server

    CERN. Geneva

    2016-01-01

    We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...

  11. Computer Architecture A Quantitative Approach

    CERN Document Server

    Hennessy, John L

    2011-01-01

    The computing world today is in the middle of a revolution: mobile clients and cloud computing have emerged as the dominant paradigms driving programming and hardware innovation today. The Fifth Edition of Computer Architecture focuses on this dramatic shift, exploring the ways in which software and technology in the cloud are accessed by cell phones, tablets, laptops, and other mobile computing devices. Each chapter includes two real-world examples, one mobile and one datacenter, to illustrate this revolutionary change.Updated to cover the mobile computing revolutionEmphasizes the two most im

  12. Efficient Protocols for Principal Eigenvector Computation over Private Data

    Directory of Open Access Journals (Sweden)

    Manas A. Pathak

    2011-12-01

    Full Text Available In this paper we present a protocol for computing the principal eigenvector of a collection of data matrices belonging to multiple semi-honest parties with privacy constraints. Our proposed protocol is based on secure multi-party computation with a semi-honest arbitrator who deals with data encrypted by the other parties using an additive homomorphic cryptosystem. We augment the protocol with randomization and oblivious transfer to make it difficult for any party to estimate properties of the data belonging to other parties from the intermediate steps. Whereas previous approaches to this problem were based on expensive QR decomposition of correlation matrices, we present an efficient algorithm using the power iteration method. We present an analysis of the correctness, security, and efficiency of the protocol, along with experimental results using a prototype implementation over simulated data and the USPS handwritten digits dataset.
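
    The numerical core here is ordinary power iteration; the protocol's novelty is evaluating each matrix-vector product on encrypted data. A plain, non-private Python sketch of that core, with illustrative names.

```python
import numpy as np

def principal_eigenvector(C, n_iter=100, tol=1e-10):
    """Power iteration for the principal eigenvector of a (PSD) matrix.

    In the protocol, the product C @ v is the step computed jointly by
    the parties under an additive homomorphic cryptosystem; everything
    else is cheap local post-processing.
    """
    v = np.random.default_rng(0).standard_normal(C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = C @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v
```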

  13. Computationally efficient Bayesian inference for inverse problems.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
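
    The acceleration described is architectural rather than algorithmic: the expensive forward model inside the likelihood is replaced by a fast spectral surrogate, after which any standard MCMC scheme applies. A minimal random-walk Metropolis sketch over a user-supplied surrogate log-posterior; illustrative, not the report's implementation.

```python
import numpy as np

def metropolis_surrogate(log_post, x0, n_samples=10000, step=0.5, seed=0):
    """Random-walk Metropolis over a cheap surrogate log-posterior.

    log_post : fast approximation of the log-posterior, e.g. built from a
               polynomial-chaos expansion of the forward model, standing
               in for the expensive forward solver.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept test
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```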

  14. A monomial chaos approach for efficient uncertainty quantification on nonlinear problems

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Bijl, H.

    2008-01-01

A monomial chaos approach is presented for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equations…

  16. Numerical aspects for efficient welding computational mechanics

    Directory of Open Access Journals (Sweden)

    Aburuga Tarek Kh.S.

    2014-01-01

Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model is constructed in order to simulate the multi-pass mismatched submerged arc welding (SAW) used in the welded tensile test specimen. A sequentially coupled thermal-mechanical analysis is performed using ABAQUS software to calculate the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the computation time of welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models were constructed for the same analysis type, and shell elements in the two-dimensional simulation showed good performance compared with brick elements. The conventional method for calculating residual stresses uses an implicit scheme, which is costly because the welding and cooling times are relatively long. In this work, the author shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. By using this new technique, it becomes possible to simulate relatively large three-dimensional structures.

  17. Computer Algebra, Instrumentation and the Anthropological Approach

    Science.gov (United States)

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  18. Energy based Efficient Resource Scheduling in Green Computing

    Directory of Open Access Journals (Sweden)

B. Vasumathi

    2015-11-01

Full Text Available Cloud computing is an evolving area of efficient utilization of computing resources. Data centers accommodating Cloud applications consume massive quantities of energy, contributing to high operating expenditures and carbon footprints. Hence, Green Cloud computing solutions are required not only to save energy for the environment but also to reduce operating costs. In this paper, we focus on the development of an energy-based resource scheduling framework and present an algorithm that considers the synergy between various data center infrastructures (i.e., software, hardware, etc.) and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds and (b) energy-efficient resource allocation strategies and a scheduling algorithm considering Quality of Service (QoS) expectations. The performance of the proposed algorithm has been evaluated against existing energy-based scheduling algorithms. The experimental results demonstrate that this approach is effective in minimizing the cost and energy consumption of Cloud applications, thus moving towards the achievement of Green Clouds.
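A minimal sketch of one energy-aware allocation heuristic in this spirit (not the paper's algorithm): pack VMs onto as few hosts as possible, filling the most power-efficient hosts first so that idle hosts can be powered down. The host and VM structures are illustrative assumptions:

```python
def allocate(vms, hosts):
    """Greedy energy-aware placement.

    vms:   list of (vm_id, cpu_demand) pairs
    hosts: list of dicts with 'id', 'capacity', 'watts_per_unit'
    """
    hosts = sorted(hosts, key=lambda h: h["watts_per_unit"])   # efficient first
    used = {h["id"]: 0 for h in hosts}
    placement = {}
    for vm_id, demand in sorted(vms, key=lambda v: -v[1]):     # largest VM first
        for h in hosts:
            if used[h["id"]] + demand <= h["capacity"]:
                used[h["id"]] += demand
                placement[vm_id] = h["id"]
                break
    return placement  # hosts with used == 0 can be switched off

hosts = [{"id": "h1", "capacity": 8, "watts_per_unit": 10},
         {"id": "h2", "capacity": 8, "watts_per_unit": 14}]
print(allocate([("a", 4), ("b", 3), ("c", 2)], hosts))  # c spills to h2
```

A QoS-aware variant would additionally reject placements that violate response-time constraints before committing a VM to a host.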

  19. Computational approaches for urban environments

    NARCIS (Netherlands)

    Helbich, M; Jokar Arsanjani, J; Leitner, M

    2015-01-01

    This book aims to promote the synergistic usage of advanced computational methodologies in close relationship to geospatial information across cities of different scales. A rich collection of chapters subsumes current research frontiers originating from disciplines such as geography, urban planning,

  1. What is computation : An epistemic approach

    NARCIS (Netherlands)

    Wiedermann, Jiří; van Leeuwen, Jan

    2015-01-01

Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too broad…

  2. Efficient Minimum-Phase Prefilter Computation Using Fast QL-Factorization

    DEFF Research Database (Denmark)

    Hansen, Morten; Christensen, Lars P.B.

    2009-01-01

This paper presents a novel approach for computing both the minimum-phase filter and the associated all-pass filter in a computationally efficient way using the fast QL-factorization. A desirable property of this approach is that the complexity is independent of the size of the matrix which is QL…

  3. Antenna arrays a computational approach

    CERN Document Server

    Haupt, Randy L

    2010-01-01

    This book covers a wide range of antenna array topics that are becoming increasingly important in wireless applications, particularly in design and computer modeling. Signal processing and numerical modeling algorithms are explored, and MATLAB computer codes are provided for many of the design examples. Pictures of antenna arrays and components provided by industry and government sources are presented with explanations of how they work. Antenna Arrays is a valuable reference for practicing engineers and scientists in wireless communications, radar, and remote sensing, and an excellent textbook for advanced antenna courses.

  4. Efficient quantum computing using coherent photon conversion.

    Science.gov (United States)

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  5. Energy Efficiency Approach to Intelligent Building

    Directory of Open Access Journals (Sweden)

    Gitanjali Birangal

    2015-07-01

Full Text Available Energy efficiency has nowadays become one of the most challenging tasks, and this has boosted research in fresh fields such as Ambient Intelligence. Energy consumption in the housing and tertiary sectors is especially high in developed countries, and there is great potential for energy savings in these sectors. Energy conservation measures are developed for newly constructed buildings and for buildings under restoration. However, to achieve a significant reduction in energy consumption, pioneering technologies, including renewable energy, should be implemented alongside the standard energy-efficiency methods. Buildings are now increasingly expected to meet higher and more complex performance requirements; among these, energy efficiency is recognized as an international goal to promote energy sustainability. Different approaches have been adopted to pursue this goal, the most up to date relating consumption patterns to human occupancy. A significant aspect that can improve energy efficiency in buildings is the use of building automation systems. However, building automation systems are usually not designed for energy conservation, as they are mostly used for comfort and safety; this regularly causes problems due to ineffective use of these systems and unawareness of energy consumption. It is therefore essential that existing system solutions be adapted to focus on energy conservation. We present a research approach for developing an intelligent system to improve energy efficiency in intelligent buildings, taking into account the different technical infrastructures of buildings.

  6. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

… to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
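A minimal PageRank iteration over a toy road graph, as a sketch of how such road scores could be computed (the adjacency matrix and damping factor 0.85 are illustrative assumptions):

```python
import numpy as np

def pagerank(adj, damping=0.85, n_iter=100):
    """Score nodes of a directed graph with the PageRank random-surfer model."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                  # crude guard for sink nodes
    M = (adj / out).T                  # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - damping) / n + damping * (M @ r)
    return r

# Toy network: edge i -> j means traffic can flow from road i to road j.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))
```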

  7. Quantum-enhanced Sensing and Efficient Quantum Computation

    Science.gov (United States)

    2015-07-27

Final report for “Quantum-Enhanced Sensing and Efficient Quantum Computation” (Ian Walmsley, THE UNIVERSITY OF…); period covered: 1 February 2013 to 31 January 2015.

  8. GRID COMPUTING AND CHECKPOINT APPROACH

    Directory of Open Access Journals (Sweden)

    Pankaj gupta

    2011-05-01

Full Text Available Grid computing is a means of allocating the computational power of a large number of computers to complex, difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large scale systems that even span organizational boundaries. In this paper we investigate the different techniques of fault tolerance which are used in many real time distributed systems. The main focus is on the types of fault occurring in the system, fault detection techniques and the recovery techniques used. A fault can occur due to link failure, resource failure or any other reason, and must be tolerated for the system to keep working smoothly and accurately. These faults can be detected and recovered by many techniques used accordingly. An appropriate fault detector can avoid loss due to system crash, and a reliable fault tolerance technique can save the system from failure. This paper shows how these methods are applied to detect and tolerate faults in various real time distributed systems. The advantages of utilizing the checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low level checkpointing packages. Therefore, such a standard, named the Grid Checkpointing Architecture, is being designed. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for an automatic recovery is the availability of checkpoint files; a strategy to increase the availability of checkpoints is replication. Grid is a form of distributed computing mainly used to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of…
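A minimal sketch of the checkpoint/restart pattern described above, assuming a pickle-based checkpoint file and a fixed save interval (the Grid Checkpointing Architecture itself is far more elaborate):

```python
import os
import pickle

CHECKPOINT = "job.ckpt"

def run_job(n_steps=1000, every=100):
    """Resume from the last successful checkpoint if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            step, state = pickle.load(f)       # restart from saved state
    else:
        step, state = 0, 0.0                   # fresh start
    while step < n_steps:
        state += step * 0.5                    # stand-in for real computation
        step += 1
        if step % every == 0:                  # periodic checkpoint
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((step, state), f)  # replicate this file elsewhere
    return state

print(run_job())
```

Replicating the checkpoint file to another grid resource, as the abstract suggests, is what keeps the saved state available when the original resource fails.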

  9. Immune based computer virus detection approaches

    Institute of Scientific and Technical Information of China (English)

    TAN Ying; ZHANG Pengtao

    2013-01-01

The computer virus is considered one of the most horrifying threats to the security of computer systems worldwide. The rapid development of evasion techniques used in viruses causes signature-based computer virus detection techniques to be ineffective. Many novel computer virus detection approaches have been proposed in the past to cope with this ineffectiveness, mainly classified into three categories: static, dynamic and heuristic techniques. Given the natural similarities between the biological immune system (BIS) and the computer security system (CSS), the artificial immune system (AIS) was developed as a new prototype in the community of anti-virus research. The immune mechanisms in the BIS provide the opportunity to construct computer virus detection models that are robust and adaptive, with the ability to detect unseen viruses. In this paper, a variety of classic computer virus detection approaches are introduced and reviewed against the background of computer virus history. Next, a variety of immune-based computer virus detection approaches are discussed in detail. Promising experimental results suggest that immune-based computer virus detection approaches are able to detect new variants and unseen viruses at lower false positive rates, which has paved a new way for anti-virus research.

  10. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their efficiency…

  12. Role of computational efficiency in process simulation

    Directory of Open Access Journals (Sweden)

    Kurt Strand

    1989-07-01

    Full Text Available It is demonstrated how efficient numerical algorithms may be combined to yield a powerful environment for analysing and simulating dynamic systems. The importance of using efficient numerical algorithms is emphasized and demonstrated through examples from the petrochemical industry.

  13. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  14. Study of Efficient Utilization of Power using green Computing

    OpenAIRE

Ms. Dheera Jadhwani, Mr. Mayur Agrawal, Mr. Hemant Mande

    2012-01-01

Green computing or green IT basically concerns environmentally sustainable computing or IT. The field of green computing is defined as "the knowledge and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—which include printers, monitors, and networking, storage devices and communications systems—efficiently and effectively with minimal or no impact on the environment." This computing is similar to green chemistry, that is, minimum utilization of hazardous materials, maximizing energy efficiency during the product's lifetime, and also promoting the recyclability or biodegradability of defunct products and factory waste.

  15. Learning and geometry computational approaches

    CERN Document Server

    Smith, Carl

    1996-01-01

The field of computational learning theory arose out of the desire to formally understand the process of learning. As potential applications to artificial intelligence became apparent, the new field grew rapidly. The learning of geometric objects became a natural area of study. The possibility of using learning techniques to compensate for unsolvability provided an attraction for individuals with an immediate need to solve such difficult problems. Researchers at the Center for Night Vision were interested in solving the problem of interpreting data produced by a variety of sensors. Current vision techniques, which have a strong geometric component, can be used to extract features. However, these techniques fall short of useful recognition of the sensed objects. One potential solution is to incorporate learning techniques into the geometric manipulation of sensor data. As a first step toward realizing such a solution, the Systems Research Center at the University of Maryland, in conjunction with the C…

  16. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy System Integrations Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm water liquid cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.

  17. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
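As a toy illustration of the final reduction (not the paper's cSLQC formulation), a small semidefinite program can be handed directly to an off-the-shelf solver; the cost matrix and unit-trace constraint below are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

# Minimize a linear cost over positive semidefinite matrices with unit trace.
C = np.array([[1.0, 0.2],
              [0.2, 2.0]])
X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> 0,             # positive semidefinite cone
               cp.trace(X) == 1]   # stand-in linear constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()
print(problem.value, X.value)
```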

  18. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  19. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  20. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  1. An efficient approach for feature-preserving mesh denoising

    Science.gov (United States)

    Lu, Xuequan; Liu, Xiaohong; Deng, Zhigang; Chen, Wenzhi

    2017-03-01

With the growing availability of various optical and laser scanners, it is easy to capture different kinds of mesh models, which are inevitably corrupted with noise. Although many mesh denoising methods proposed in recent years can produce encouraging results, most of them still suffer from poor computational efficiency. In this paper, we propose a highly efficient approach for mesh denoising while preserving geometric features. Specifically, our method consists of three steps: initial vertex filtering, normal estimation, and vertex update. At the initial vertex filtering step, we introduce a fast iterative vertex filter to substantially reduce noise interference. With the initially filtered mesh from the above step, we then estimate face and vertex normals: an unstandardized bilateral filter efficiently smooths face normals, and an efficient scheme estimates vertex normals from the filtered face normals. Finally, at the vertex update step, by utilizing both the filtered face normals and the estimated vertex normals obtained in the previous step, we propose a novel iterative vertex update algorithm to efficiently update vertex positions. Qualitative and quantitative comparisons show that our method can outperform the selected state-of-the-art methods, in particular in computational efficiency (up to about 32 times faster).

  2. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    Science.gov (United States)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account: a computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  3. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying

    2014-11-07

For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  4. Materials Approach to Fuel Efficient Tires

    Energy Technology Data Exchange (ETDEWEB)

    Votruba-Drzal, Peter [PPG Industries, Monroeville, PA (United States); Kornish, Brian [PPG Industries, Monroeville, PA (United States)

    2015-06-30

The objective of this project was to design, develop, and demonstrate fuel efficient and safety regulation compliant tire filler and barrier coating technologies that will improve overall fuel efficiency by at least 2%. The program developed and validated two complementary approaches to improving fuel efficiency through tire improvements. The first technology was a modified silica-based product that is 15% lower in cost and/or enables a 10% improvement in tread wear while maintaining the already demonstrated minimum of 2% improvement in average fuel efficiency. The second technology was a barrier coating with reduced oxygen transmission rate compared to the state-of-the-art halobutyl rubber inner liners that will provide extended placarded tire pressure retention at significantly reduced material usage. A lower-permeance, thinner inner liner coating which retains tire pressure was expected to deliver the additional 2% reduction in fleet fuel consumption. From the 2006 Transportation Research Board Report [1], a 10 percent reduction in rolling resistance can reduce consumer fuel expenditures by 1 to 2 percent for typical vehicles. This savings is equivalent to 6 to 12 gallons per year. A 1 psi drop in inflation pressure increases the tire's rolling resistance by about 1.4 percent.

  5. Toward exascale computing through neuromorphic approaches.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  6. Cell sorting using efficient light shaping approaches

    DEFF Research Database (Denmark)

    Banas, Andrew; Palima, Darwin; Villangca, Mark Jayson;

    2016-01-01

Early detection of diseases can save lives. Hence, there is emphasis on sorting rare disease-indicating cells within small dilute quantities, such as in the confines of lab-on-a-chip devices. In this work, optical forces are used to isolate red blood cells detected by machine vision. This approach is gentler, less invasive and more economical compared to conventional FACS systems. As cells are less responsive to plastic or glass beads commonly used in the optical manipulation literature, and since laser safety would be an issue in clinical use, efficient approaches are developed for utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method, which can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps, is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the catapulted cells.

  7. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick

    2016-01-01

Full Text Available While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution: the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling, which have resulted in widely known modelling software. Yet a survey of the topic reveals a fragmentation of efforts across disciplines, such as the organic and inorganic fields, but also between high efficiency concepts such as hot carrier cells and intermediate band concepts. We show how this obstacle to the resolution of practical research obstacles may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and across materials systems. We present a European COST Action, "MultiscaleSolar", kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third generation concepts to be evaluated in practice, for societal and industrial applications.

  8. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL, and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. … Thus, sampling times to predict accurate APL are reduced by a factor of 10.

  9. COMPUTATIONALLY EFFICIENT ESPRIT METHOD FOR DIRECTION FINDING

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

In this paper, a low complexity ESPRIT algorithm based on the power method and Orthogonal-triangular (QR) decomposition is presented for direction finding, which does not require a priori knowledge of the source number or a predetermined threshold (to separate the signal and noise eigenvalues). Firstly, according to the estimation of the noise subspace obtained by the power method, a novel source number detection method without eigen-decomposition is proposed based on QR decomposition. Furthermore, the eigenvectors of the signal subspace can be determined according to the Q matrix, and then the directions of signals can be computed by the ESPRIT algorithm. To determine the source number and subspace, the computational complexity of the proposed algorithm is approximately (2 log2 n + 2.67)M^3, where n is the power of the covariance matrix and M is the number of array elements. Compared with the Singular Value Decomposition (SVD) based algorithm, it achieves substantial computational savings with comparable performance. The simulation results demonstrate its effectiveness and robustness.

  10. An Efficient Speedup Strategy for Constant Sum Game Computations

    Directory of Open Access Journals (Sweden)

    Alexandru-Ioan STAN

    2014-01-01

Full Text Available Large classes of game theoretic problems seem to defy attempts at finding polynomial-time algorithms while analyzing large amounts of data. This premise leads naturally to the possibility of using efficient parallel computing implementations when seeking exact solutions to some of these problems. Although alpha-beta algorithms for game-tree searches with more than one player show moderate parallel performance, this paper sets forth an alpha-beta strategy enhanced with transposition tables in order to offer satisfactory speedups on high performance servers. When access to the transposition tables is done in low constant delay time, the achieved speedups should approach the theoretical upper bounds of the code parallelism. We tested the strategy on a well-known combinatorial game.
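A minimal serial sketch of alpha-beta search with a transposition table, using a toy Nim-like game as a stand-in for the combinatorial game (the game rules and table layout are illustrative assumptions; production tables also store alpha/beta bound flags for values obtained after a cutoff):

```python
def children(state):
    """Toy Nim-like game: a move removes 1 or 2 stones from the pile."""
    return [state - k for k in (1, 2) if state >= k]

def alphabeta(state, alpha, beta, maximizing, table):
    """Exact alpha-beta search; the transposition table caches solved positions."""
    key = (state, maximizing)
    if key in table:
        return table[key]
    if state == 0:                  # player to move cannot move and loses
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for child in children(state):
            best = max(best, alphabeta(child, alpha, beta, False, table))
            alpha = max(alpha, best)
            if beta <= alpha:
                break               # prune the remaining siblings
    else:
        best = float("inf")
        for child in children(state):
            best = min(best, alphabeta(child, alpha, beta, True, table))
            beta = min(beta, best)
            if beta <= alpha:
                break
    table[key] = best
    return best

# +1 means the first player wins a pile of 10 with optimal play.
print(alphabeta(10, float("-inf"), float("inf"), True, {}))
```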

  11. Efficient Associative Computation with Discrete Synapses.

    Science.gov (United States)

    Knoblauch, Andreas

    2016-01-01

Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n^2/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n^2/k^2 memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning--for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2 bit (four-state) synapses, ζ = 0.96 for 3 bit (8-state) synapses, and ζ > 0.99 for 4 bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C_I = 1 bit per computer bit and up to C_S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
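A minimal Willshaw-style associative memory with binary (clipped Hebbian) synapses makes the storage scheme concrete; the pattern sizes and threshold rule are illustrative assumptions:

```python
import numpy as np

n, k = 100, 5                        # n neurons, k active units per pattern
rng = np.random.default_rng(0)

def random_pattern():
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, k, replace=False)] = 1
    return p

patterns = [random_pattern() for _ in range(40)]

# Binary synapse W_ij = 1 if units i and j were ever coactive in a stored pattern.
W = np.zeros((n, n), dtype=int)
for p in patterns:
    W |= np.outer(p, p)

def recall(cue):
    """Fire every unit that receives input from all active cue units."""
    return (W @ cue >= cue.sum()).astype(int)

# Auto-association: a partial cue (one active unit deleted) still retrieves.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[0]] = 0
print(np.array_equal(recall(cue), patterns[0]))  # usually True at low load
```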

  12. Efficient Tate pairing computation using double-base chains

    Institute of Scientific and Technical Information of China (English)

    ZHAO ChangAn; ZHANG FangGuo; HUANG JiWu

    2008-01-01

Pairing-based cryptosystems have developed very fast in the last few years. The efficiencies of these cryptosystems depend on the computation of the bilinear pairings. In this paper, a new efficient algorithm based on double-base chains for computing the Tate pairing is proposed for odd characteristic p > 3. The inherent sparseness of the double-base number system reduces the computational cost for computing the Tate pairing evidently. The new algorithm is 9% faster than the previous fastest method for the embedding degree k = 6.

  13. Cell sorting using efficient light shaping approaches

    Science.gov (United States)

    Bañas, Andrew; Palima, Darwin; Villangca, Mark; Glückstad, Jesper

    2016-03-01

    Early detection of diseases can save lives. Hence, there is emphasis in sorting rare disease-indicating cells within small dilute quantities such as in the confines of lab-on-a-chip devices. In our work, we use optical forces to isolate red blood cells detected by machine vision. This approach is gentler, less invasive and more economical compared to conventional FACS systems. As cells are less responsive to plastic or glass beads commonly used in the optical manipulation literature, and since laser safety would be an issue in clinical use, we develop efficient approaches in utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method that can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the catapulted cells.

  14. An efficient numerical approach to electrostatic microelectromechanical system simulation

    Institute of Scientific and Technical Information of China (English)

    Li Pu

    2009-01-01

    Computational analysis of electrostatic microelectromechanical systems (MEMS) requires an electrostatic analysis to compute the electrostatic forces acting on micromechanical structures and a mechanical analysis to compute the deformation of micromechanical structures. Typically, the mechanical analysis is performed on an undeformed geometry. However, the electrostatic analysis is performed on the deformed position of microstructures. In this paper, a new efficient approach to self-consistent analysis of electrostatic MEMS in the small deformation case is presented. In this approach, when the microstructures undergo small deformations, the surface charge densities on the deformed geometry can be computed without updating the geometry of the microstructures. This algorithm is based on the linear mode shapes of a microstructure as basis functions. A boundary integral equation for the electrostatic problem is expanded into a Taylor series around the undeformed configuration, and a new coupled-field equation is presented. This approach is validated by comparing its results with the results available in the literature and ANSYS solutions, and shows attractive features comparable to ANSYS.

  15. A Big Data Approach to Computational Creativity

    CERN Document Server

    Varshney, Lav R; Varshney, Kush R; Bhattacharjya, Debarun; Schoergendorfer, Angela; Chee, Yi-Min

    2013-01-01

    Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not been able to achieve a valid selective step. This work shows how bringing data sources from the creative domain and from hedonic psychophysics together with big data analytics techniques can overcome this shortcoming to yield a system that can produce novel and high-quality creative artifacts. Our data-driven approach is demonstrated through a computational creativity system for culinary recipes and menus we developed and deployed, which can operate either autonomously or semi-autonomously with human interaction. We also comment on the volume, velocity, variety, and veracity of data in computational creativity.

  16. Towards Lagrangian approach to quantum computations

    CERN Document Server

    Vlasov, A Yu

    2003-01-01

In this work, the possibility and actuality of a Lagrangian approach to quantum computations are discussed. The finite-dimensional Hilbert spaces used in this area provide some challenge for such a consideration. The model discussed here can be considered as an analogue of Weyl quantization of field theory via the path integral in L. D. Faddeev's approach. Weyl quantization can also be used in the finite-dimensional case, and some formulas may be simply rewritten by changing integrals to finite sums. On the other hand, there are specific difficulties relevant to the finite case. This work has some allusions to phase space models of quantum computations developed recently by different authors.

  17. Skyline View: Efficient Distributed Subspace Skyline Computation

    Science.gov (United States)

    Kim, Jinhan; Lee, Jongwuk; Hwang, Seung-Won

Skyline queries have gained much attention as alternative query semantics with pros (e.g., low query formulation overhead) and cons (e.g., lack of control over result size). To overcome the cons, subspace skyline queries have recently been studied, where users iteratively specify relevant feature subspaces on the search space. However, existing works mainly focus on centralized databases. This paper aims to extend subspace skyline computation to distributed environments such as the Web, where the most important issue is to minimize the cost of accessing vertically distributed objects. Toward this goal, we exploit prior skylines whose subspaces overlap the given subspace. In particular, we develop algorithms for three scenarios: when the subspace of prior skylines is a superspace, a subspace, or neither. Our experimental results validate that our proposed algorithm shows significantly better performance than the state-of-the-art algorithms.

  18. A computable expression of closure to efficient causation.

    Science.gov (United States)

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-07

In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not, and that more complex definitions could indeed create some crucial obstacles to computability.
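One classic lambda-calculus device for expressing this kind of self-referential closure is a fixed-point combinator; the snippet below renders it in Python purely as an illustration of the formalism, not as the authors' construction:

```python
# Z combinator: the strict-evaluation variant of the Y fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A function defined in terms of itself, with no explicit recursion anywhere.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```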

  19. Efficient Computation of Optimal Trading Strategies

    CERN Document Server

    Boyarshinov, Victor

    2010-01-01

Given the return series for a set of instruments, a trading strategy is a switching function that transfers wealth from one instrument to another at specified times. We present efficient algorithms for constructing (ex-post) trading strategies that are optimal with respect to the total return, the Sterling ratio and the Sharpe ratio. Such ex-post optimal strategies are useful analysis tools. They can be used to analyze the "profitability of a market" in terms of optimal trading; to develop benchmarks against which real trading can be compared; and, within an inductive framework, the optimal trades can be used to teach learning systems (predictors), which are then used to identify future trading opportunities.
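For the simplest of the three criteria, total return with zero transaction costs, the ex-post optimum is just to hold the best instrument in each period. A minimal sketch, where the return matrix is an illustrative assumption:

```python
import numpy as np

def expost_optimal_total_return(returns):
    """returns[t, i] = simple return of instrument i in period t.

    With zero transaction costs, switching to the argmax each period
    maximizes the compounded total return."""
    choices = returns.argmax(axis=1)                  # best instrument per period
    growth = 1.0 + returns[np.arange(len(returns)), choices]
    return choices, growth.prod() - 1.0

# Two instruments over three periods.
R = np.array([[0.01, -0.02],
              [-0.01, 0.03],
              [0.02, 0.00]])
print(expost_optimal_total_return(R))  # holds 0, 1, 0; total ~6.1%
```

With transaction costs or ratio-based objectives such as Sterling or Sharpe, the switching decisions interact across periods and the problem becomes a dynamic program over (period, instrument) states rather than a per-period argmax.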

  20. Study of Efficient Utilization of Power using green Computing

    Directory of Open Access Journals (Sweden)

Ms. Dheera Jadhwani, Mr. Mayur Agrawal, Mr. Hemant Mande

    2012-12-01

Full Text Available Green computing or green IT basically concerns environmentally sustainable computing or IT. The field of green computing is defined as "the knowledge and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—which include printers, monitors, and networking, storage devices and communications systems—efficiently and effectively with minimal or no impact on the environment." This computing is similar to green chemistry, that is, minimum utilization of hazardous materials, maximizing energy efficiency during the product's lifetime, and also promoting the recyclability or biodegradability of defunct products and factory waste.

  1. Earthquake detection through computationally efficient similarity search

    Science.gov (United States)

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
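A minimal sketch of the fingerprint-and-group idea, binary feature fingerprints bucketed by bands in the spirit of locality-sensitive hashing (the windowing and feature choice are illustrative assumptions, not the FAST pipeline itself):

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def fingerprint(window, n_bits=32):
    """Compact binary fingerprint: spectral magnitudes above their median."""
    mag = np.abs(np.fft.rfft(window))[:n_bits]
    return tuple((mag > np.median(mag)).astype(int))

def candidate_pairs(windows, n_bands=4):
    """Group windows whose fingerprints agree exactly on any band of bits."""
    buckets = defaultdict(list)
    for idx, w in enumerate(windows):
        fp = fingerprint(w)
        band = len(fp) // n_bands
        for b in range(n_bands):
            buckets[(b, fp[b * band:(b + 1) * band])].append(idx)
    pairs = set()
    for members in buckets.values():
        pairs.update(combinations(sorted(members), 2))
    return pairs  # candidate similar-event pairs, verified later on waveforms

windows = [np.random.randn(128) for _ in range(10)]
print(len(candidate_pairs(windows)))
```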

  2. Secure Computation, I/O-Efficient Algorithms and Distributed Signatures

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas

    2012-01-01

    adversary corrupting a constant fraction of the players and servers. Using packed secret sharing, the data can be stored in a compact way but will only be accessible in a block-wise fashion. We explore the possibility of using I/O-efficient algorithms to nevertheless compute on the data as efficiently...

  3. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  4. RIOT: I/O-Efficient Numerical Computing without SQL

    CERN Document Server

    Zhang, Yi; Yang, Jun

    2009-01-01

    R is a numerical computing environment that is widely popular for statistical data analysis. Like many such environments, R performs poorly for large datasets whose sizes exceed that of physical memory. We present our vision of RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in a way transparent to the users. We describe our experience with RIOT-DB, an initial prototype that uses a relational database system as a backend. Despite the overhead and inadequacy of generic database systems in handling array data and numerical computation, RIOT-DB significantly outperforms R in many large-data scenarios, thanks to a suite of high-level, inter-operation optimizations that integrate seamlessly into R. While many techniques in RIOT are inspired by databases (and, for RIOT-DB, realized by a database system), RIOT users are insulated from anything database related. Compared with previous approaches that require users to learn new languages and rewrite their programs to interface with a datab...

  5. Cost Efficient Design Approach for Reversible Programmable Logic Arrays

    Directory of Open Access Journals (Sweden)

Md. Riazur Rahman

    2016-06-01

Full Text Available Reversible programmable logic arrays (PLAs) are at the heart of the design of efficient low power computers. This paper presents an efficient approach to designing reversible PLAs that maximizes the usability of garbage outputs and also reduces the number of ancilla inputs generated. The designs for the proposed essential components and the architecture of the reversible grid network for designing AND and EX-OR planes are also presented. Several algorithms have been proposed and presented to describe the programming interfaces in the context of reversible PLA construction. Lastly, recent results on the trade-off between cost factors of standard benchmark circuits show that the proposed design clearly outperforms existing ones in terms of various cost factors.

  6. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  7. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  8. Efficient computation of GW energy level corrections for molecules described in a plane wave basis

    Science.gov (United States)

    Rousseau, Bruno; Laflamme Janssen, Jonathan; Côté, Michel

    2013-03-01

    An efficient computational approach is presented to compute the ionisation energy and quasiparticle band gap at the level of the GW approximation when the Hilbert space is described in terms of plane waves. The method relies on ab initio calculations as a starting point. Then, the use of the Sternheimer equation eliminates slowly convergent sums on conduction states. Further, the Lanczos method is used to efficiently extract the most important eigenstates of the dielectric operator. This approach avoids the explicit computation of matrix elements of the dielectric operator in the plane wave basis, a crippling bottleneck of the brute force approach. The method is initially applied to organic molecules of current interest in the field of organic photovoltaics. Given the completeness of the plane wave basis, systematic convergence studies can be conducted. Furthermore, the method can readily be extended to describe polymers, which are also of interest for photovoltaic applications, but remain a significant computational challenge for methods based on localized basis sets.
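
    The Lanczos step mentioned here, extracting the dominant eigenpairs of a large symmetric operator from matrix-vector products alone, is easy to sketch generically. In the sketch below a random symmetric matrix stands in for the (much larger, implicitly applied) dielectric operator; this illustrates the technique, not the paper's implementation.

```python
import numpy as np

def lanczos(matvec, n, k, rng=np.random.default_rng(0)):
    """k-step Lanczos tridiagonalization of a symmetric operator that is
    only available through matrix-vector products (matvec)."""
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(k):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        # full reorthogonalization: fine for a sketch, costly at scale
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)       # Ritz values / vectors
    return theta, Q @ S

# a random symmetric matrix stands in for the dielectric operator
A = np.random.default_rng(1).standard_normal((200, 200))
A = (A + A.T) / 2
theta, V = lanczos(lambda v: A @ v, n=200, k=40)
print(theta[-3:])                      # extremal eigenstates converge first
print(np.linalg.eigvalsh(A)[-3:])
```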

  9. An evolutionary computational approach for the dynamic Stackelberg competition problems

    Directory of Open Access Journals (Sweden)

    Lorena Arboleda-Castro

    2016-06-01

    Full Text Available Stackelberg competition models are an important family of economic decision problems from game theory, in which the main goal is to find optimal strategies between two competitors taking into account their hierarchical relationship. Although these models have been widely studied in the past, very few works deal with uncertainty scenarios, especially those that vary over time. In this regard, the present research studies this topic and proposes a computational method for efficiently solving dynamic Stackelberg competition models. The computational experiments suggest that the proposed approach is effective for problems of this nature.
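
    A nested evolutionary scheme is the usual computational shape of such bilevel problems: an outer search over the leader's decision, where each candidate is scored against an (approximately) best-responding follower. The sketch below uses toy quadratic payoffs and a simple (1+1)-evolution strategy at both levels; the payoff functions and all parameters are illustrative assumptions, not the paper's method or benchmark.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy bilevel game (hypothetical payoffs): the leader picks x first;
# the follower observes x and minimizes its own cost.
def follower_cost(x, y):
    return (y - 0.5 * x) ** 2 + 0.1 * y ** 2

def leader_profit(x, y):
    return -((x - 2) ** 2) - (y - 1) ** 2

def follower_best_response(x, iters=200, step=0.3):
    """Inner (1+1)-evolution strategy: mutate y, keep improvements."""
    y = 0.0
    for _ in range(iters):
        cand = y + step * rng.standard_normal()
        if follower_cost(x, cand) < follower_cost(x, y):
            y = cand
    return y

def solve_leader(iters=200, step=0.3):
    """Outer (1+1)-ES over the leader's decision; scoring each candidate
    against the follower's best response encodes the hierarchy."""
    x = 0.0
    best = leader_profit(x, follower_best_response(x))
    for _ in range(iters):
        cand = x + step * rng.standard_normal()
        val = leader_profit(cand, follower_best_response(cand))
        if val > best:
            x, best = cand, val
    return x, best

x, profit = solve_leader()
print(f"leader decision ~ {x:.2f}, leader profit ~ {profit:.2f}")
```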

  10. Handbook of computational approaches to counterterrorism

    CERN Document Server

    Subrahmanian, VS

    2012-01-01

    Terrorist groups throughout the world have been studied primarily through the use of social science methods. However, major advances in IT during the past decade have led to significant new ways of studying terrorist groups, making forecasts, learning models of their behaviour, and shaping policies about their behaviour. Handbook of Computational Approaches to Counterterrorism provides the first in-depth look at how advanced mathematics and modern computing technology is shaping the study of terrorist groups. This book includes contributions from world experts in the field, and presents extens

  11. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  12. A new approach to constructing efficient stiffly accurate EPIRK methods

    Science.gov (United States)

    Rainwater, G.; Tokman, M.

    2016-10-01

    The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.

  13. Computer science approach to quantum control

    Energy Technology Data Exchange (ETDEWEB)

    Janzing, D.

    2006-07-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics therefore has two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  14. Novel computational approaches characterizing knee physiotherapy

    OpenAIRE

    Wangdo Kim; Veloso, Antonio P; Duarte Araujo; Kohles, Sean S.

    2014-01-01

    A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physi...

  15. Advanced computational approaches to biomedical engineering

    CERN Document Server

    Saha, Punam K; Basu, Subhadip

    2014-01-01

    There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig

  16. Fast fundamental frequency estimation: Making a statistically efficient estimator computationally efficient

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    estimator has a high computational complexity. In this paper, we propose an algorithm for lowering this complexity significantly by showing that the NLS estimator can be computed efficiently by solving two Toeplitz-plus-Hankel systems of equations and by exploiting the recursive-in-order matrix structures...
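
    For context, the baseline NLS estimator that such fast algorithms accelerate can be written as a grid search over candidate fundamental frequencies, projecting the signal onto a harmonic basis at each candidate; the paper's contribution is to obtain the same quantities far more cheaply via the Toeplitz-plus-Hankel structure. A brute-force sketch, with an assumed harmonic order and grid:

```python
import numpy as np

def nls_pitch(x, fs, f0_grid, L=5):
    """Brute-force NLS fundamental frequency estimate: for each candidate
    f0, project x onto a basis of L harmonics and keep the f0 whose
    projection captures the most energy."""
    n = np.arange(len(x))
    best_f0, best_energy = None, -np.inf
    for f0 in f0_grid:
        # 2L columns: cosine and sine of each harmonic of f0
        Z = np.column_stack(
            [f(2 * np.pi * f0 * (l + 1) * n / fs)
             for l in range(L) for f in (np.cos, np.sin)])
        coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
        energy = np.linalg.norm(Z @ coef) ** 2
        if energy > best_energy:
            best_f0, best_energy = f0, energy
    return best_f0

fs = 8000
n = np.arange(400)
x = sum(np.sin(2 * np.pi * 220 * (l + 1) * n / fs) / (l + 1) for l in range(3))
x += 0.1 * np.random.default_rng(0).standard_normal(len(n))
print(nls_pitch(x, fs, np.arange(100, 400, 1.0)))   # ~220 Hz
```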

  17. Computational Approaches to Nucleic Acid Origami.

    Science.gov (United States)

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  18. A multidisciplinary approach to solving computer related vision problems.

    Science.gov (United States)

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer related vision issues by including optometry as a part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision related computer issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  19. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    Science.gov (United States)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of ⟨U_UV⟩/2 (where ⟨U_UV⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term, mainly reflecting the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
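
    The morphometric form above is just a dot product between four geometric measures of the solute and four pre-fitted coefficients, which is why it evaluates in well under a second. A minimal sketch of that final step, with placeholder coefficients and measure values (the real ones come from the ER fit and from morphometry of the actual solute):

```python
from math import pi

def water_reorganization(V, A, C, X, coeffs):
    """Morphometric approach: a linear combination of the solute's four
    geometric measures -- excluded volume V, surface area A, integrated
    mean curvature C, integrated Gaussian curvature X. The coefficients
    are fitted beforehand (with the ER method in the paper); the values
    used below are placeholders, not fitted numbers."""
    c_V, c_A, c_C, c_X = coeffs
    return c_V * V + c_A * A + c_C * C + c_X * X

def hydration_free_energy(mean_U_uv, V, A, C, X, coeffs):
    # HFE = <U_UV>/2 + water reorganization term
    return 0.5 * mean_U_uv + water_reorganization(V, A, C, X, coeffs)

coeffs = (0.15, -0.02, 0.5, -0.1)   # placeholder coefficients
# <U_UV> comes from an MD run; V, A, C, X from morphometry of the solute
print(hydration_free_energy(-120.0, 300.0, 250.0, 40.0, 4 * pi, coeffs))
```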

  20. Efficient computation of bifurcation diagrams via adaptive ROMs

    Energy Technology Data Exchange (ETDEWEB)

    Terragni, F [Gregorio Millán Institute for Fluid Dynamics, Nanoscience and Industrial Mathematics, Universidad Carlos III de Madrid, E-28911 Leganés (Spain); Vega, J M, E-mail: fterragn@ing.uc3m.es [E.T.S.I. Aeronáuticos, Universidad Politécnica de Madrid, E-28040 Madrid (Spain)

    2014-08-01

    Various ideas concerning model reduction based on proper orthogonal decomposition are discussed, exploited, and suited to the approximation of complex bifurcations in some dissipative systems. The observation that the most energetic modes involved in these low dimensional descriptions depend only weakly on the actual values of the problem parameters is firstly highlighted and used to develop a simple strategy to capture the transitions occurring over a given bifurcation parameter span. Flexibility of the approach is stressed by means of some numerical experiments. A significant improvement is obtained by introducing a truncation error estimate to detect when the approximation fails. Thus, the considered modes are suitably updated on demand, as the bifurcation parameter is varied, in order to account for possible changes in the phase space of the system that might be missed. A further extension of the method to more complex (quasi-periodic and chaotic) attractors is finally outlined by implementing a control of truncation instabilities, which leads to a general, adaptive reduced order model for the construction of bifurcation diagrams. Illustration of the ideas and methods in the complex Ginzburg–Landau equation (a paradigm of laminar flows on a bounded domain) evidences a fairly good computational efficiency. (paper)
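
    The core ingredient of such POD-based reduced order models is extracting the most energetic modes from a matrix of solution snapshots and truncating by an energy criterion. A minimal sketch of that step (generic SVD-based POD, not the paper's adaptive update-on-demand machinery):

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Extract the most energetic POD modes from a snapshot matrix
    (columns = solution states at several parameter values / times),
    truncating once the retained singular values capture the requested
    fraction of the total energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# toy snapshots: a slowly varying family of profiles over a parameter mu
x = np.linspace(0, 1, 200)
S = np.column_stack([np.sin(np.pi * x * (1 + 0.1 * mu)) for mu in range(30)])
modes, s = pod_modes(S)
print(modes.shape, np.round(s[:5], 3))   # few modes suffice -> small ROM
```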

  2. EFFICIENT VM LOAD BALANCING ALGORITHM FOR A CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Jasmin James

    2012-09-01

    Full Text Available Cloud computing is a fast-growing area in computing research and industry today. With the advancement of the Cloud, new possibilities are opening up for how applications can be built and how different services can be offered to the end user through virtualization on the internet. Cloud service providers offer large-scale computing infrastructure priced by usage, and provide the infrastructure services in a very flexible manner that users can scale up or down at will. Establishing an effective load balancing algorithm and using Cloud computing resources efficiently are among the Cloud service providers' ultimate goals. In this paper, firstly, an analysis of different Virtual Machine (VM) load balancing algorithms is carried out. Secondly, a new VM load balancing algorithm, the 'Weighted Active Monitoring Load Balancing Algorithm', has been proposed and implemented for an IaaS framework in a simulated cloud computing environment using CloudSim tools, for the datacenter to effectively balance requests between the available virtual machines by assigning a weight, in order to achieve better performance parameters such as response time and data processing time.
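
    The weighted-allocation idea can be sketched in a few lines: give each VM a weight reflecting its capacity and send each request to the VM with the lowest active-load-to-weight ratio. The class and method names below are illustrative assumptions, not the CloudSim implementation:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    weight: int      # relative capacity assigned by the operator
    active: int = 0  # requests currently allocated to this VM

class WeightedActiveMonitor:
    """Send each request to the VM with the lowest active-load/weight
    ratio, so more capable VMs receive proportionally more requests."""
    def __init__(self, vms):
        self.vms = vms

    def allocate(self):
        vm = min(self.vms, key=lambda v: v.active / v.weight)
        vm.active += 1
        return vm

    def release(self, vm):
        vm.active -= 1

lb = WeightedActiveMonitor([VM("vm1", 1), VM("vm2", 2), VM("vm3", 4)])
for _ in range(14):
    lb.allocate()
print({v.name: v.active for v in lb.vms})   # ~ 2 : 4 : 8 split
```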

  3. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    Science.gov (United States)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.

  4. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is again posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would fit best to manage drug discovery and clinical development data, generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  5. Computer Forensics Education - the Open Source Approach

    Science.gov (United States)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  6. An Efficient Method for Solving Spread Option Pricing Problem: Numerical Analysis and Computing

    Directory of Open Access Journals (Sweden)

    R. Company

    2016-01-01

    Full Text Available This paper deals with the numerical analysis and computing of the spread option pricing problem described by a partial differential equation in two spatial variables. Both European and American cases are treated. Taking advantage of a cross derivative removing technique, an explicit difference scheme is developed that retains the benefits of the one-dimensional finite difference method, preserving positivity, accuracy, and computational time efficiency. Numerical results illustrate the interest of the approach.

  7. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it has been early studied in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  8. An Approach to Ad hoc Cloud Computing

    CERN Document Server

    Kirby, Graham; Macdonald, Angus; Fernandes, Alvaro

    2010-01-01

    We consider how underused computing resources within an enterprise may be harnessed to improve utilization and create an elastic computing infrastructure. Most current cloud provision involves a data center model, in which clusters of machines are dedicated to running cloud infrastructure software. We propose an additional model, the ad hoc cloud, in which infrastructure software is distributed over resources harvested from machines already in existence within an enterprise. In contrast to the data center cloud model, resource levels are not established a priori, nor are resources dedicated exclusively to the cloud while in use. A participating machine is not dedicated to the cloud, but has some other primary purpose such as running interactive processes for a particular user. We outline the major implementation challenges and one approach to tackling them.

  9. Interacting electrons theory and computational approaches

    CERN Document Server

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  10. Homothetic Efficiency and Test Power: A Non-Parametric Approach

    NARCIS (Netherlands)

    J. Heufer (Jan); P. Hjertstrand (Per)

    2015-01-01

    We provide a nonparametric revealed preference approach to demand analysis based on homothetic efficiency. Homotheticity is a useful restriction but data rarely satisfies testable conditions. To overcome this we provide a way to estimate homothetic efficiency of

  11. Exploiting Self-organization in Bioengineered Systems: A Computational Approach.

    Science.gov (United States)

    Davis, Delin; Doloman, Anna; Podgorski, Gregory J; Vargis, Elizabeth; Flann, Nicholas S

    2017-01-01

    The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches.

  12. Efficient Graph Based Approach to Large Scale Role Engineering

    Directory of Open Access Journals (Sweden)

    Dana Zhang

    2014-04-01

    Full Text Available Role engineering is the process of defining a set of roles that offer administrative benefit for Role Based Access Control (RBAC), which ensures data privacy. It is a business critical task that is required by enterprises wishing to migrate to RBAC. However, existing methods of role generation have not analysed what constitutes a beneficial role and, as a result, often produce inadequate solutions in a time consuming manner. To address the urgent issue of identifying high quality RBAC structures in real enterprise environments, we present a cost based analysis of the problem for both flat and hierarchical RBAC structures. Specifically, we propose two cost models to evaluate the administration cost of roles and provide a k-partite graph approach to role engineering. Existing role cost evaluations are approximations that overestimate the benefit of a role. Our method and cost models can provide exact role cost and show when existing role cost evaluations can be used as a lower bound to improve efficiency without affecting the quality of results. In the first work to address role engineering using large scale real data sets, we propose RoleAnnealing, a fast solution space search algorithm with incremental computation and guided search space heuristics. Our experimental results on both real and synthetic data sets demonstrate that high quality RBAC configurations that maintain data privacy are identified efficiently by RoleAnnealing. Comparison with an existing approach shows RoleAnnealing is significantly faster and produces RBAC configurations with lower cost.

  13. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
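
    A common lightweight route to a clipped Voronoi diagram in 2D is to intersect each cell polygon with the domain. The sketch below does this with SciPy and Shapely and simply skips unbounded cells; a robust implementation (unlike this sketch, and unlike the paper's exact algorithm) would close them first, e.g. by mirroring sites across the domain boundary.

```python
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import Polygon, box

def clipped_voronoi_2d(points, xmin, ymin, xmax, ymax):
    """Clip each bounded Voronoi cell against a rectangular domain.
    Unbounded cells (those touching infinity) are skipped here."""
    domain = box(xmin, ymin, xmax, ymax)
    vor = Voronoi(points)
    cells = {}
    for site, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or not region:       # unbounded or empty cell
            continue
        cell = Polygon(vor.vertices[region]).intersection(domain)
        if not cell.is_empty:
            cells[site] = cell
    return cells

pts = np.random.default_rng(0).random((50, 2))
cells = clipped_voronoi_2d(pts, 0, 0, 1, 1)
print(len(cells), sum(c.area for c in cells.values()))   # total area <= 1
```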

  14. Approaching the truth in induction motor efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Haataja, J.; Pyrhoenen, J. [Lappeenranta Univ. of Technology (Finland)

    2000-07-01

    The importance of motor efficiency in the motor market is increasing. Therefore the accuracy of the efficiency value given by the manufacturer is significant. The accuracy of the efficiency determination depends on the test method used and the precision of the loss determination by the test method. Determining methods can be grouped into two broad categories: direct measurement methods and indirect methods (loss segregation methods). In the following, the effect of different methods on the given motor efficiency is investigated in more detail. When testing low voltage, polyphase squirrel cage induction motors, IEC publication 34-2 allows both a direct and an indirect method to be used, but according to NEMA the IEEE-112 test method B - input-output - with loss segregation is to be used. European manufacturers prefer to use the ''summation of losses'' according to the IEC 34-2. When comparing IEC 34-2 to IEEE 112 it can be noticed that the latter one defines stabilised operating temperatures for the measurements, more accurate measuring instruments and more precise procedures to determine all losses than the IEC 34-2. The IEC 34-2 assumes an operating temperature, assumes a value for stray load loss and permits measurements at any motor operating condition with less accurate instruments. On the other hand the IEC 34-2 tolerances for motor losses are smaller than in the IEEE 112. However, it is important to note that IEEE states that besides the nominal efficiency value also a guaranteed minimum efficiency shall be shown on the nameplate. Depending on the manufacturing tolerances this minimum value can be better than the worst value that the tolerance allows. The test results shown below were measured for a high efficiency 690/400 V, four pole, general purpose 5.5 kW motor using both the IEEE 112 method B and the IEC 34-2 indirect (summation of losses) and direct methods. (orig.)

  15. An Efficient Approach for Identifying Stable Lobes with Discretization Method

    Directory of Open Access Journals (Sweden)

    Baohai Wu

    2013-01-01

    Full Text Available This paper presents a new approach for the quick identification of chatter stability lobes with the discretization method. Firstly, three different kinds of stability regions are defined: absolute stable region, valid region, and invalid region. Secondly, while identifying the chatter stability lobes, three different regions within the chatter stability lobes are identified with relatively large time intervals. Thirdly, the stability boundary within the valid regions is finely calculated to get exact chatter stability lobes. The proposed method only needs to test a small portion of the spindle speed and cutting depth set; about 89% of the computation time is saved compared with the full discretization method. It takes only about 10 minutes to get exact chatter stability lobes. Since it is based on the discretization method, the proposed method can be used for different immersion cutting conditions, including low immersion cutting, and can be directly implemented in the workshop to improve the efficiency of machining parameter selection.

  16. Genetic braid optimization: A heuristic approach to compute quasiparticle braids

    Science.gov (United States)

    McDonald, Ross B.; Katzgraber, Helmut G.

    2013-02-01

    In topologically protected quantum computation, quantum gates can be carried out by adiabatically braiding two-dimensional quasiparticles, reminiscent of entangled world lines. Bonesteel et al. [Phys. Rev. Lett. 95, 140503 (2005)], as well as Leijnse and Flensberg [Phys. Rev. B 86, 104511 (2012)], recently provided schemes for computing quantum gates from quasiparticle braids. Mathematically, the problem of executing a gate becomes that of finding a product from a given set of generators (matrices) that approximates the gate best, up to an error. To date, efficient methods to compute these gates only strive to optimize for accuracy. We explore the possibility of using a generic approach applicable to a variety of braiding problems based on evolutionary (genetic) algorithms. The method efficiently finds optimal braids while allowing the user to optimize for the relative utilities of accuracy and/or length. Furthermore, when optimizing for error only, the method can quickly produce efficient braids.
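
    The genetic search described here can be sketched generically: evolve a word over a set of generator matrices so that their product approximates a target gate, with a fitness that trades off accuracy against braid length. The generators below are random toy unitaries standing in for actual braid-group representations, and all GA parameters are illustrative:

```python
import numpy as np
rng = np.random.default_rng(0)

def random_unitary(n):
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def word_matrix(word, gens):
    M = np.eye(gens[0].shape[0], dtype=complex)
    for g in word:
        M = M @ gens[g]
    return M

def fitness(word, gens, target, lam=0.002):
    """Phase-invariant distance to the target gate plus a length
    penalty: lam sets the accuracy/length trade-off."""
    M = word_matrix(word, gens)
    dist = 1 - abs(np.trace(target.conj().T @ M)) / target.shape[0]
    return -(dist + lam * len(word))

def mutate(word, n_gens):
    w, op = list(word), rng.integers(3)
    if op == 0 and w:                                   # substitute
        w[rng.integers(len(w))] = int(rng.integers(n_gens))
    elif op == 1:                                       # insert
        w.insert(int(rng.integers(len(w) + 1)), int(rng.integers(n_gens)))
    elif op == 2 and len(w) > 1:                        # delete
        del w[rng.integers(len(w))]
    return w

def evolve(gens, target, pop=60, generations=300, init_len=8):
    population = [list(rng.integers(len(gens), size=init_len))
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, gens, target), reverse=True)
        survivors = population[: pop // 2]
        population = survivors + [mutate(w, len(gens)) for w in survivors]
    return max(population, key=lambda w: fitness(w, gens, target))

U1, U2 = random_unitary(2), random_unitary(2)
gens = [U1, U2, U1.conj().T, U2.conj().T]      # generators and inverses
target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
best = evolve(gens, target)
print(len(best), fitness(best, gens, target))
```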

  17. A Computationally Efficient Aggregation Optimization Strategy of Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Model Predictive Control (MPC) is a popular technique and has been successfully used in various industrial applications. However, the big drawback of MPC, the formidable on-line computational effort involved, limits its applicability to relatively slow and/or small processes with a moderate number of inputs. This paper develops an aggregation optimization strategy for MPC that can improve the computational efficiency of MPC. For the regulation problem, an input decaying aggregation optimization algorithm is presented, which aggregates all the original optimized variables on the control horizon with a decaying sequence with respect to the current control action.

  18. Universally Composable Efficient Multiparty Computation from Threshold Homomorphic Encryption

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2003-01-01

    We present a new general multiparty computation protocol for the cryptographic scenario which is universally composable — in particular, it is secure against an active and adaptive adversary, corrupting any minority of the parties. The protocol is as efficient as the best known statically secure...... solutions, in particular the number of bits broadcast (which dominates the complexity) is Ω (nk |C|), where n is the number of parties, k is a security parameter, and |C| is the size of a circuit doing the desired computation. Unlike previous adaptively secure protocols for the cryptographic model, our...

  19. General approaches in ensemble quantum computing

    Indian Academy of Sciences (India)

    V Vimalan; N Chandrakumar

    2008-01-01

    We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate our methodology experimentally for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration taken to a third compared to the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.

  20. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  1. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...... bound. Experiments on real-life signals indicate the applicability of the methods in practical low local signal-to-noise ratios....

  2. Universally Composable Efficient Multiparty Computation from Threshold Homomorphic Encryption

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2003-01-01

    We present a new general multiparty computation protocol for the cryptographic scenario which is universally composable — in particular, it is secure against an active and adaptive adversary, corrupting any minority of the parties. The protocol is as efficient as the best known statically secure ...... protocol does not use non-committing encryption, instead it is based on homomorphic threshold encryption, in particular the Paillier cryptosystem....

  3. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
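
    The coordinate (COO) storage idea sketches easily outside MATLAB as well: keep only the nonzeros as index-tuple/value pairs and write kernels that touch the stored entries alone. The toy class below (not the Tensor Toolbox API) stores a sparse 3-way tensor and implements one kernel typical of decomposition algorithms, tensor-times-vector:

```python
import numpy as np
from collections import defaultdict

class COOTensor:
    """Sparse N-way tensor in coordinate format: only the nonzero
    entries are stored, as index-tuple -> value pairs."""
    def __init__(self, shape):
        self.shape = tuple(shape)
        self.data = {}

    def __setitem__(self, idx, val):
        if val != 0:
            self.data[tuple(idx)] = val

    def ttv(self, v, mode):
        """Tensor-times-vector along `mode`: contracts one way of the
        tensor while touching only the stored nonzeros."""
        out_shape = self.shape[:mode] + self.shape[mode + 1:]
        acc = defaultdict(float)
        for idx, val in self.data.items():
            rest = idx[:mode] + idx[mode + 1:]
            acc[rest] += val * v[idx[mode]]
        result = COOTensor(out_shape)
        for idx, val in acc.items():
            result[idx] = val
        return result

T = COOTensor((1000, 1000, 1000))   # 10^9 entries, only 3 stored
T[0, 1, 2] = 3.0
T[5, 1, 9] = -1.0
T[0, 0, 0] = 2.0
print(T.ttv(np.ones(1000), mode=1).data)
```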

  4. Novel computational approaches characterizing knee physiotherapy

    Directory of Open Access Journals (Sweden)

    Wangdo Kim

    2014-01-01

    Full Text Available A knee joint’s longevity depends on the proper integration of structural components in an axial alignment. If just one of the components is abnormally off-axis, the biomechanical system fails, resulting in arthritis. The complexity of various failures in the knee joint has led orthopedic surgeons to select total knee replacement as a primary treatment. In many cases, this means sacrificing much of an otherwise normal joint. Here, we review novel computational approaches to describe knee physiotherapy by introducing a new dimension of foot loading to the knee axis alignment producing an improved functional status of the patient. New physiotherapeutic applications are then possible by aligning foot loading with the functional axis of the knee joint during the treatment of patients with osteoarthritis.

  5. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought...... that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast...... to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  6. Nutrient Use Efficiency in Plants: Concepts and Approaches

    NARCIS (Netherlands)

    Hawkesford, M.J.; Kopriva, S.; De Kok, L.J.

    2014-01-01

    Nutrient Use Efficiency in Plants: Concepts and Approaches is the ninth volume in the Plant Ecophysiology series. It presents a broad overview of topics related to improvement of nutrient use efficiency of crops. Nutrient use efficiency (NUE) is a measure of how well plants use the available mineral

  7. A Highly Efficient Parallel Algorithm for Computing the Fiedler Vector

    CERN Document Server

    Manguoglu, Murat

    2010-01-01

    The eigenvector corresponding to the second smallest eigenvalue of the Laplacian of a graph, known as the Fiedler vector, has a number of applications in areas that include matrix reordering, graph partitioning, protein analysis, data mining, machine learning, and web search. The computation of the Fiedler vector has been regarded as an expensive process as it involves solving a large eigenvalue problem. We present a novel and efficient parallel algorithm for computing the Fiedler vector of large graphs based on the Trace Minimization algorithm (Sameh et al.). We compare the parallel performance of our method with a multilevel scheme, designed specifically for computing the Fiedler vector, which is implemented in routine MC73_Fiedler of the Harwell Subroutine Library (HSL). In addition, we compare the quality of the Fiedler vector for the application of weighted matrix reordering and provide a metric for measuring the quality of reordering.
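
    For orientation, here is the standard serial way to obtain a Fiedler vector with SciPy's sparse eigensolver (not the parallel Trace Minimization algorithm of the abstract). The toy graph is two cliques joined by a single edge, which the vector's signs partition:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def fiedler_vector(adjacency):
    """Eigenvector of the graph Laplacian's second-smallest eigenvalue.
    which='SM' is fine for small graphs; large graphs want shift-invert,
    LOBPCG or a multilevel solver instead."""
    L = laplacian(adjacency.astype(float))
    vals, vecs = eigsh(L, k=2, which='SM')
    return vecs[:, np.argsort(vals)[1]]

# two 4-cliques joined by a single edge
A = sp.lil_matrix((8, 8))
for i in range(4):
    for j in range(4):
        if i != j:
            A[i, j] = 1
            A[i + 4, j + 4] = 1
A[3, 4] = A[4, 3] = 1
f = fiedler_vector(A.tocsr())
print(np.sign(f))   # opposite signs on the two cliques: a 2-way partition
```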

  8. A computational approach to negative priming

    Science.gov (United States)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), and its dependence on subtle parameter changes (such as the response-stimulus interval) usually varies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  9. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    Full Text Available A key direction in the reengineering of modern computer networks is their transfer to the new energy-saving technology IEEE 802.3az. To make a reasoned decision about the transition to the new technology, a technique is needed that allows network engineers to answer the question of whether a network upgrade is economically feasible. Aim: The aim of this research is the development of a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models for calculating the power consumption of a computer network port operating in the IEEE 802.3 standard and in the energy-efficient mode of the IEEE 802.3az standard. A queuing model is used to calculate the frame transmission time in the communication channel. To determine the values of the network operation parameters, a multiagent network monitoring method is proposed. Results: The method allows the economic impact of transferring a computer network to the energy-saving technology IEEE 802.3az to be calculated. To determine the network performance parameters, SNMP network monitoring systems based on RMON MIB agents are proposed.
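
    The economic core of such a feasibility check is a simple energy-delta calculation: how much power each port saves when idle periods are spent in 802.3az Low Power Idle (LPI) instead of at full power, scaled by port count, hours and tariff, then weighed against the upgrade cost. All figures below are placeholder assumptions, not the paper's measured values:

```python
def annual_saving(ports, p_active_w, p_lpi_w, idle_fraction,
                  price_per_kwh, hours=8760):
    """Energy and cost saved by letting idle ports drop into IEEE
    802.3az Low Power Idle instead of staying fully powered.
    All parameter values passed in are illustrative placeholders."""
    saved_w_per_port = (p_active_w - p_lpi_w) * idle_fraction
    kwh = ports * saved_w_per_port * hours / 1000
    return kwh, kwh * price_per_kwh

kwh, cost = annual_saving(ports=480, p_active_w=0.5, p_lpi_w=0.1,
                          idle_fraction=0.9, price_per_kwh=0.15)
print(f"{kwh:.0f} kWh/year, {cost:.0f} EUR/year")
# payback time = upgrade cost / annual saving decides feasibility
```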

  10. DEA efficiency analysis: A DEA approach with double frontiers

    Science.gov (United States)

    Azizi, Hossein

    2014-11-01

    Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision-making units (DMUs). Conventional DEA evaluates the performance of each DMU using a set of most favourable weights. As a result, traditional DEA models can be considered methods for the analysis of the best relative efficiency, or analysis of the optimistic efficiency. DEA efficient DMUs obtained from conventional DEA models create an efficient production frontier. Traditional DEA can be used to identify units with good performance in the most desirable scenarios. There is a similar approach that evaluates the performance indicators of each DMU using a set of most unfavourable weights. Accordingly, such models can be considered models for analysing the worst relative efficiency, or pessimistic efficiency. This approach uses the inefficient production frontier for determining the worst relative efficiency that can be assigned to each DMU. DMUs lying on the inefficient production frontier are referred to as DEA inefficient, while those lying on neither the efficient nor the inefficient frontier are declared DEA unspecified. It can be argued that both relative efficiencies should be considered simultaneously and that any approach with only one of them would be biased. This paper proposed the integration of both efficiencies as an interval, so that the overall performance score belongs to this interval. It was shown that the efficiency interval provides more information than either of the two efficiencies, which was illustrated using two numerical examples.
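
    The optimistic (best relative efficiency) score comes from one linear program per DMU, the classical CCR multiplier model; the pessimistic score solves the analogous program against the inefficient frontier (reversing the inequalities and minimizing), and the two together give the interval. A sketch of the optimistic side with SciPy and toy data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_optimistic(X, Y, j0):
    """Input-oriented CCR multiplier model for DMU j0:
        max  u . y_j0
        s.t. v . x_j0 = 1,  u . y_j - v . x_j <= 0 (all j),  u, v >= 0
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[j0], np.zeros(m)])          # maximize u.y_j0
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # v.x_j0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs
print([round(ccr_optimistic(X, Y, j), 3) for j in range(len(X))])
```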

  11. Co-operative Scheduled Energy Aware Load-Balancing technique for an Efficient Computational Cloud

    Directory of Open Access Journals (Sweden)

    T R V Anandharajan

    2011-03-01

    Full Text Available In recent years, Cloud computing has been evolving from scientific to non-scientific and commercial applications. Power consumption and load balancing are very important and complex problems in computational Clouds. A computational Cloud differs from traditional high-performance computing systems in the heterogeneity of the computing nodes, as well as of the communication links that connect the different nodes together. Load balancing is a very important component in commodity-services-based cloud computing. There is a need to develop algorithms that can capture this complexity yet can be easily implemented and used to solve a wide range of load-balancing scenarios in data- and computing-intensive applications. In this paper, we propose to find the most efficient cloud resource through a co-operative, power-aware, scheduled load-balancing solution to the Cloud load-balancing problem. The developed algorithm combines the inherent efficiency of the centralized approach with the energy efficiency and fault-tolerant nature of a distributed environment like the Cloud.

  12. Cell sorting using efficient light shaping approaches

    DEFF Research Database (Denmark)

    2016-01-01

    and light modulation devices. The Generalized Phase Contrast (GPC) method that can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the catapulted cells. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading

  13. Blueprinting Approach in Support of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2012-03-01

    Full Text Available Current cloud service offerings, i.e., Software-as-a-service (SaaS, Platform-as-a-service (PaaS and Infrastructure-as-a-service (IaaS offerings are often provided as monolithic, one-size-fits-all solutions and give little or no room for customization. This limits the ability of Service-based Application (SBA developers to configure and syndicate offerings from multiple SaaS, PaaS, and IaaS providers to address their application requirements. Furthermore, combining different independent cloud services necessitates a uniform description format that facilitates the design, customization, and composition. Cloud Blueprinting is a novel approach that allows SBA developers to easily design, configure and deploy virtual SBA payloads on virtual machines and resource pools on the cloud. We propose the Blueprint concept as a uniform abstract description for cloud service offerings that may cross different cloud computing layers, i.e., SaaS, PaaS and IaaS. To support developers with the SBA design and development in the cloud, this paper introduces a formal Blueprint Template for unambiguously describing a blueprint, as well as a Blueprint Lifecycle that guides developers through the manipulation, composition and deployment of different blueprints for an SBA. Finally, the empirical evaluation of the blueprinting approach within an EC’s FP7 project is reported and an associated blueprint prototype implementation is presented.

  14. Energy Efficient Approach in RFID Network

    Science.gov (United States)

    Mahdin, Hairulnizam; Abawajy, Jemal; Salwani Yaacob, Siti

    2016-11-01

    Radio Frequency Identification (RFID) is among the key technologies of the Internet of Things (IoT). It is a sensing technology that can monitor, identify, locate and track physical objects via their tags. Energy in RFID is commonly used unwisely because readers repeatedly read the same tag for as long as it resides in the reader's vicinity. Repeated readings are unnecessary because they only generate duplicate data that contains no new information. The reading process needs to be scheduled accordingly, to minimize the chance of repeated readings and save energy. This reduces operational cost and can prolong the lifetime of the tag's battery, which cannot be replaced. In this paper, we propose an approach named SELECT to minimize the energy spent during reading processes. Experiments show that the proposed algorithm contributes significant energy savings in RFID compared to other approaches.
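
    The duplicate-filtering idea behind such energy savings can be sketched as a time-window filter: a read is passed on only if the tag has not been accepted within the last few seconds. This is a generic sketch of duplicate-read suppression, not the SELECT algorithm itself:

```python
import time

class DuplicateReadFilter:
    """Suppress repeated reads of a tag that lingers in the reader's
    field: a read is accepted only if the tag was not already accepted
    within the last `window` seconds."""
    def __init__(self, window=5.0):
        self.window = window
        self.last_seen = {}

    def accept(self, tag_id, now=None):
        now = time.monotonic() if now is None else now
        prev = self.last_seen.get(tag_id)
        if prev is not None and now - prev < self.window:
            return False          # duplicate: no new information
        self.last_seen[tag_id] = now
        return True

f = DuplicateReadFilter(window=5.0)
reads = [("tag1", 0.0), ("tag1", 1.2), ("tag2", 2.0), ("tag1", 6.0)]
print([(t, f.accept(t, at)) for t, at in reads])
# [('tag1', True), ('tag1', False), ('tag2', True), ('tag1', True)]
```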

  15. A new approach in CHP steam turbines thermodynamic cycles computations

    Directory of Open Access Journals (Sweden)

    Grković Vojin R.

    2012-01-01

    Full Text Available This paper presents a new approach to the mathematical modeling of the thermodynamic cycles and electric power of utility district-heating and cogeneration steam turbines. The approach is based on the application of dimensionless mass flows, which describe the thermodynamic cycle of a combined heat and power steam turbine. The mass flows are calculated relative to the mass flow to the low-pressure turbine. The procedure introduces the extraction mass flow load parameter ν_h, which clearly indicates the energy transformation process, as well as the cogeneration turbine design features, but also its fitness for the electrical energy system requirements. The presented approach allows fast computations, as well as direct calculation of the selected energy efficiency indicators. The approach is exemplified with the calculation results of the district heat power to electric power ratio, as well as the cycle efficiency, versus ν_h. The influence of ν_h on the conformity of a combined heat and power turbine to the grid requirements is also analyzed and discussed. [Project of the Ministry of Science of the Republic of Serbia, no. 33049: Development of a CHP demonstration plant with gasification of biomass]

  16. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
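
    A minimal sketch of the basis-sampling idea in Python: fit a penalized spline using m adaptively chosen knots instead of all n basis functions, so the solve costs O(nm²) rather than O(n³). The knot-selection rule below (oversample where the response changes quickly) is an illustrative stand-in for the paper's scheme, not a reproduction of it:

        import numpy as np

        rng = np.random.default_rng(0)

        def adaptive_knots(x, y, m):
            # Oversample knots where the response changes quickly: an
            # illustrative stand-in for the response-adaptive sampling scheme.
            order = np.argsort(x)
            xs, ys = x[order], y[order]
            weight = np.abs(np.gradient(ys)) + 1e-12
            idx = rng.choice(len(xs), size=m, replace=False, p=weight / weight.sum())
            return np.sort(xs[idx])

        def fit_spline(x, y, knots, lam=1e-3):
            # Penalized least squares on a reduced cubic radial basis: m basis
            # functions instead of n, so the solve is O(n m^2) rather than O(n^3).
            def design(t):
                return np.column_stack([np.ones_like(t), t] +
                                       [np.abs(t - k) ** 3 for k in knots])
            Z = design(x)
            beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
            return lambda t: design(t) @ beta

        x = rng.uniform(0, 1, 5000)
        y = np.sin(8 * x) + 0.1 * rng.standard_normal(x.size)
        f = fit_spline(x, y, adaptive_knots(x, y, m=30))
        print(float(np.mean((f(x) - np.sin(8 * x)) ** 2)))  # small fitting error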

  17. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.

  18. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.
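
    The key property, that one backward sweep yields the gradient for any number of parameters, can be illustrated on a scalar forward-Euler model. A hedged sketch (a stand-in for the hybrid DAE power-system setting of the paper):

        # Discrete adjoint for x_{k+1} = x_k + h * f(x_k, p) with f(x, p) = -p * x
        # and cost J = x_N**2. One forward sweep plus one backward (adjoint)
        # sweep yields dJ/dp at a cost independent of the number of parameters.

        def forward(x0, p, h, N):
            xs = [x0]
            for _ in range(N):
                xs.append(xs[-1] + h * (-p * xs[-1]))
            return xs

        def adjoint_gradient(x0, p, h, N):
            xs = forward(x0, p, h, N)
            lam = 2.0 * xs[-1]            # adjoint seed: dJ/dx_N
            grad = 0.0
            for k in reversed(range(N)):
                grad += lam * (-h * xs[k])   # explicit p-dependence of step k
                lam *= (1.0 - h * p)         # propagate: d x_{k+1} / d x_k
            return grad

        x0, p, h, N = 1.0, 0.7, 0.01, 200
        g = adjoint_gradient(x0, p, h, N)
        # closed-form check: d/dp [x0^2 (1 - h p)^(2N)]
        g_exact = 2 * N * x0**2 * (1 - h * p) ** (2 * N - 1) * (-h)
        print(g, g_exact)   # the two values agree to machine precision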

  19. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    Science.gov (United States)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.

  20. A New Stochastic Computing Methodology for Efficient Neural Network Implementation.

    Science.gov (United States)

    Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L

    2016-03-01

    This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding), extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach provides practically total noise immunity due to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression and a pattern recognition task. The low cost of the methodology in terms of hardware, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and parallel computing.
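
    The extended-range encoding can be sketched in a few lines: each bipolar bitstream with P(1) = (v+1)/2 carries a value v in [-1, 1], and a number outside that range is carried as the ratio of two such streams. The operand values and stream length below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(1)

        def bipolar_encode(v, n_bits):
            # A bipolar stochastic bitstream with P(bit=1) = (v + 1) / 2
            # encodes a value v in [-1, 1].
            return rng.random(n_bits) < (v + 1.0) / 2.0

        def bipolar_decode(bits):
            return 2.0 * bits.mean() - 1.0

        # Represent x = 3.2, outside the classical stochastic-computing range,
        # as the ratio of two bipolar-encoded signals (illustrative operands):
        x = 3.2
        numer, denom = 0.8, 0.25        # x = numer / denom, both within [-1, 1]
        n_bits = 200_000
        sa = bipolar_encode(numer, n_bits)
        sb = bipolar_encode(denom, n_bits)
        x_hat = bipolar_decode(sa) / bipolar_decode(sb)
        print(x_hat)   # close to 3.2; accuracy grows with the stream length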

  1. Improving robustness and computational efficiency using modern C++

    Energy Technology Data Exchange (ETDEWEB)

    Paterno, M. [Fermilab; Kowalkowski, J. [Fermilab; Green, C. [Fermilab

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  2. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.
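
    The reason set operations are cheap on TRZ data is that each tree is stored, in essence, as its set of bipartitions. A toy Python sketch of that representation (not the TRZ file format itself) showing unique-tree filtering and a majority consensus:

        from collections import Counter

        # Toy encoding: a bipartition is the frozenset of taxa on one side of
        # an edge, and a tree is a frozenset of bipartitions.
        t1 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})
        t2 = frozenset({frozenset({"A", "B"}), frozenset({"C", "D"})})
        t3 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})

        collection = [t1, t2, t3]

        # Set operations on tree collections (unique trees = set of trees):
        unique_trees = set(collection)
        print(len(unique_trees))        # 2: t1 and t3 are identical trees

        # Majority consensus: bipartitions present in more than half the trees.
        counts = Counter(bp for tree in collection for bp in tree)
        majority = {bp for bp, c in counts.items() if c > len(collection) / 2}
        print(majority)  # {frozenset({'A','B'}), frozenset({'A','B','C'})}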

  3. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    Directory of Open Access Journals (Sweden)

    Perrin H. Beatty

    2016-10-01

    Full Text Available A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields.
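
    Flux balance analysis reduces to a linear program: maximize a target flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a hypothetical three-reaction nitrogen toy network (not the crop model discussed in the review):

        import numpy as np
        from scipy.optimize import linprog

        # Metabolites: external N -> internal N -> amino-N -> biomass.
        # Stoichiometric matrix S (rows: internal metabolites, cols: reactions).
        S = np.array([
            [1, -1,  0],   # N_int: made by uptake, used by assimilation
            [0,  1, -1],   # amino-N: made by assimilation, used by biomass
        ])
        bounds = [(0, 10),    # v1: nitrogen uptake, capped at 10 units
                  (0, None),  # v2: assimilation
                  (0, None)]  # v3: biomass formation (the objective)

        # linprog minimizes, so negate the biomass flux to maximize it.
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x)   # optimal fluxes; biomass flux hits the uptake cap of 10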

  4. NaI(Tl) Detector Efficiency Computation Using Radioactive Parallelepiped Sources Based on Efficiency Transfer Principle

    OpenAIRE

    MOHAMED S. BADAWI; Mona M. Gouda; Ahmed M. El-Khatib; Thabet, Abouzeid A.; Salim, Ahmed A.; Mahmoud I. Abbas

    2015-01-01

    The efficiency transfer (ET) principle is considered as a simple numerical simulation method, which can be used to calculate the full-energy peak efficiency (FEPE) of 3″×3″ NaI(Tl) scintillation detector over a wide energy range. In this work, the calculations of FEPE are based on computing the effective solid angle ratio between a radioactive point and parallelepiped sources located at various distances from the detector surface. Besides, the attenuation of the photon by the source-to-detect...

  5. Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists

    Science.gov (United States)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-08-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well-known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency - this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.

  6. Efficient Capacity Computation and Power Optimization for Relay Networks

    CERN Document Server

    Parvaresh, Farzad

    2011-01-01

    The capacity or approximations to capacity of various single-source single-destination relay network models has been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in order of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power o...

  7. Efficient Computation of Distance Sketches in Distributed Networks

    CERN Document Server

    Sarma, Atish Das; Pandurangan, Gopal

    2011-01-01

    Distance computation is one of the most fundamental primitives used in communication networks. The cost of effectively and accurately computing pairwise network distances can become prohibitive in large-scale networks such as the Internet and Peer-to-Peer (P2P) networks. To negotiate the rising need for very efficient distance computation, approximation techniques for numerous variants of this question have recently received significant attention in the literature. The goal is to preprocess the graph and store a small amount of information such that whenever a query for any pairwise distance is issued, the distance can be well approximated (i.e., with small stretch) very quickly in an online fashion. Specifically, the pre-processing (usually) involves storing a small sketch with each node, such that at query time only the sketches of the concerned nodes need to be looked up to compute the approximate distance. In this paper, we present the first theoretical study of distance sketches derived from distance ora...

  8. Differential area profiles: decomposition properties and efficient computation.

    Science.gov (United States)

    Ouzounis, Georgios K; Pesaresi, Martino; Soille, Pierre

    2012-08-01

    Differential area profiles (DAPs) are point-based multiscale descriptors used in pattern analysis and image segmentation. They are defined through sets of size-based connected morphological filters that constitute a joint area opening top-hat and area closing bottom-hat scale-space of the input image. The work presented in this paper explores the properties of this image decomposition through sets of area zones. An area zone defines a single plane of the DAP vector field and contains all the peak components of the input image, whose size is between the zone's attribute extrema. Area zones can be computed efficiently from hierarchical image representation structures, in a way similar to regular attribute filters. Operations on the DAP vector field can then be computed without the need for exporting it first, and an example with the leveling-like convex/concave segmentation scheme is given. This is referred to as the one-pass method and it is demonstrated on the Max-Tree structure. Its computational performance is tested and compared against conventional means for computing differential profiles, relying on iterative application of area openings and closings. Applications making use of the area zone decomposition are demonstrated in problems related to remote sensing and medical image analysis.

  9. Increasing Computational Efficiency of Cochlear Models Using Boundary Layers

    Science.gov (United States)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2016-01-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  10. Increasing computational efficiency of cochlear models using boundary layers

    Science.gov (United States)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  11. Efficient Use of Preisach Hysteresis Model in Computer Aided Design

    Directory of Open Access Journals (Sweden)

    IONITA, V.

    2013-05-01

    Full Text Available The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer).
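
    The classical Preisach operator itself is compact to state: the output is a weighted sum of two-state relay hysterons R_{alpha,beta} over the half-plane alpha >= beta. A minimal sketch with uniform weights (in practice the weights are identified from the corrected measurements described above):

        import numpy as np

        class Preisach:
            def __init__(self, n=40, lo=-1.0, hi=1.0):
                a = np.linspace(lo, hi, n)
                self.alpha, self.beta = np.meshgrid(a, a, indexing="ij")
                self.mask = self.alpha >= self.beta      # valid hysterons
                self.state = -np.ones_like(self.alpha)   # all relays start "down"
                self.mu = self.mask / self.mask.sum()    # uniform weights (demo only)

            def apply(self, u):
                self.state[u >= self.alpha] = 1.0        # relays switch up
                self.state[u <= self.beta] = -1.0        # relays switch down
                return float((self.mu * self.state)[self.mask].sum())

        model = Preisach()
        up = [model.apply(u) for u in np.linspace(-1, 1, 50)]
        down = [model.apply(u) for u in np.linspace(1, -1, 50)]
        # Same input (~0.02) on the rising vs falling branch: the loop is open.
        print(up[25], down[24])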

  12. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  13. Using Weighted Graphs for Computationally Efficient WLAN Location Determination

    DEFF Research Database (Denmark)

    Thomsen, Bent; Hansen, Rene

    2007-01-01

    Indoor location-based services hold promise for a multitude of valuable services, but require micro-detailed geo-referencing not achievable with "outdoor" technologies such as GPS and cellular networks. A widely used technique for accurate indoor positioning is location fingerprinting which makes...... burden for large buildings and is thus problematic for tracking users in real time on processor-constrained mobile devices. In this paper we present a technique for improving the computational efficiency of the fingerprinting technique such that location determination becomes tractable on a mobile device...

  14. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    Science.gov (United States)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the size of the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by using the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
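
    For data on a full Cartesian grid the kernel matrix factorizes as a Kronecker product, so the cubic-cost solve collapses to one eigendecomposition per axis. A minimal sketch of that structured solve (the paper additionally handles missing grid points and the regularization mentioned above):

        import numpy as np

        def rbf(a, b, ell):
            d = a[:, None] - b[None, :]
            return np.exp(-0.5 * (d / ell) ** 2)

        n1, n2, noise = 50, 60, 1e-2
        x1, x2 = np.linspace(0, 1, n1), np.linspace(0, 1, n2)
        Y = np.sin(6 * x1)[:, None] + np.cos(4 * x2)[None, :]   # grid responses

        # K = K1 kron K2: eigendecompose each factor instead of K itself.
        K1, K2 = rbf(x1, x1, 0.2), rbf(x2, x2, 0.2)
        l1, Q1 = np.linalg.eigh(K1)
        l2, Q2 = np.linalg.eigh(K2)

        # alpha = (K + noise*I)^(-1) vec(Y), via per-axis operations only:
        T = Q1.T @ Y @ Q2                 # rotate into the joint eigenbasis
        T /= np.outer(l1, l2) + noise     # eigenvalues of K + noise*I
        alpha = Q1 @ T @ Q2.T             # rotate back

        mean = K1 @ alpha @ K2.T          # posterior mean on the same grid
        print(float(np.max(np.abs(mean - Y))))   # residual near the noise level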

  15. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w

  16. Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach

    DEFF Research Database (Denmark)

    Wan, Can; Lin, Jin; Song, Yonghua;

    2016-01-01

    This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV...

  17. Computer Algebra meets Finite Elements: an Efficient Implementation for Maxwell's Equations

    CERN Document Server

    Koutschan, Christoph; Schoeberl, Joachim

    2011-01-01

    We consider the numerical discretization of the time-domain Maxwell's equations with an energy-conserving discontinuous Galerkin finite element formulation. This particular formulation allows for higher order approximations of the electric and magnetic field. Special emphasis is placed on an efficient implementation which is achieved by taking advantage of recurrence properties and the tensor-product structure of the chosen shape functions. These recurrences have been derived symbolically with computer algebra methods reminiscent of the holonomic systems approach.

  18. A novel neural dynamical approach to convex quadratic program and its efficient applications.

    Science.gov (United States)

    Xia, Youshen; Sun, Changyin

    2009-12-01

    This paper proposes a novel neural dynamical approach to a class of convex quadratic programming problems where the number of variables is larger than the number of equality constraints. The proposed continuous-time and discrete-time neural dynamical approaches are guaranteed to be globally convergent to an optimal solution. Moreover, the number of neurons is equal to the number of equality constraints, whereas the number of neurons in existing neural dynamical methods is at least the number of variables. Therefore, the proposed neural dynamical approach has low computational complexity. Compared with conventional numerical optimization methods, the proposed discrete-time neural dynamical approach reduces the number of multiplication operations per iteration and has a large computational step length. Computational examples and two efficient applications to signal processing and robot control further confirm the good performance of the proposed approach.

  19. Homothetic Efficiency and Test Power: A Non-Parametric Approach

    NARCIS (Netherlands)

    J. Heufer (Jan); P. Hjertstrand (Per)

    2015-01-01

    We provide a nonparametric revealed preference approach to demand analysis based on homothetic efficiency. Homotheticity is a useful restriction but data rarely satisfy testable conditions. To overcome this we provide a way to estimate homothetic efficiency of consump

  20. Computationally efficient sub-band coding of ECG signals.

    Science.gov (United States)

    Husøy, J H; Gjerde, T

    1996-03-01

    A data compression technique is presented for the compression of discrete time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
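
    The coding chain can be sketched end to end with the 2-tap Haar QMF pair in a full 4-level tree, giving the 16 critically sampled sub-bands the paper settles on; the filter length, threshold, and quantizer step below are illustrative assumptions, not the FIR/IIR banks evaluated in the paper:

        import numpy as np

        def haar_split(x):
            lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass, decimated by 2
            hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass, decimated by 2
            return lo, hi

        def subbands(x, levels=4):
            bands = [x]
            for _ in range(levels):                 # full tree: split every band
                bands = [part for b in bands for part in haar_split(b)]
            return bands                            # 2**levels = 16 sub-bands

        def run_length(symbols):
            runs, count = [], 1
            for prev, cur in zip(symbols, symbols[1:]):
                if cur == prev:
                    count += 1
                else:
                    runs.append((prev, count)); count = 1
            runs.append((symbols[-1], count))
            return runs

        t = np.arange(1024) / 360.0                 # ~360 Hz, a typical ECG rate
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 15     # crude spiky ECG stand-in
        coeffs = np.concatenate(subbands(ecg, levels=4))
        # Threshold small coefficients, then quantize uniformly (step 0.02):
        q = np.round(np.where(np.abs(coeffs) > 0.05, coeffs, 0.0) / 0.02).astype(int)
        runs = run_length(list(q))
        print(len(q), len(runs))   # many coefficients collapse into few runs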

  1. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2012-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...

  2. A New Approach to Practical Active-Secure Two-Party Computation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio

    2011-01-01

    We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...

  3. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  4. An Efficient and Flexible Technical Approach to Develop and Deliver Online Peer Assessment

    NARCIS (Netherlands)

    Miao, Yongwu; Koper, Rob

    2006-01-01

    Miao, Y., & Koper, R. (2007). An Efficient and Flexible Technical Approach to Develop and Deliver Online Peer Assessment. In C. A. Chinn, G. Erkens & S. Puntambekar (Eds.), Proceedings of the 7th Computer Supported Collaborative Learning (CSCL 2007) conference 'Mice, Minds, and Society' (pp. 502-511).

  5. A Two Layer Approach to the Computability and Complexity of Real Functions

    DEFF Research Database (Denmark)

    Lambov, Branimir Zdravkov

    2003-01-01

    We present a new model for computability and complexity of real functions together with an implementation that it based on it. The model uses a two-layer approach in which low-type basic objects perform the computation of a real function, but, whenever needed, can be complemented with higher type...... in computable analysis, while the efficiency of the implementation is not compromised by the need to create and maintain higher-type objects....

  6. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    Science.gov (United States)

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted at itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient, as it may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
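
    The static baseline that such routers use, and against which the M-PCNN algorithms are compared, is Dijkstra's algorithm rooted at the router, with the SPT encoded as a parent map. A minimal sketch:

        import heapq

        def shortest_path_tree(graph, root):
            """Dijkstra's algorithm; graph: {node: {neighbor: link_cost}}."""
            dist = {root: 0.0}
            parent = {root: None}          # the SPT as a parent map
            heap = [(0.0, root)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue               # stale queue entry, skip
                for v, w in graph[u].items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], parent[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            return parent, dist

        g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
        parent, dist = shortest_path_tree(g, "A")
        print(parent, dist)   # C is reached through B: total cost 3, not 4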

  7. Efficient parameter sensitivity computation for spatially extended reaction networks

    Science.gov (United States)

    Lester, C.; Yates, C. A.; Baker, R. E.

    2017-01-01

    Reaction-diffusion models are widely used to study spatially extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on the stochastic models of spatially extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.

  8. The Efficient Use of Vector Computers with Emphasis on Computational Fluid Dynamics : a GAMM-Workshop

    CERN Document Server

    Gentzsch, Wolfgang

    1986-01-01

    The GAMM Committee for Numerical Methods in Fluid Mechanics organizes workshops which should bring together experts of a narrow field of computational fluid dynamics (CFD) to exchange ideas and experiences in order to speed-up the development in this field. In this sense it was suggested that a workshop should treat the solution of CFD problems on vector computers. Thus we organized a workshop with the title "The efficient use of vector computers with emphasis on computational fluid dynamics". The workshop took place at the Computing Centre of the University of Karlsruhe, March 13-15, 1985. The participation had been restricted to 22 people of 7 countries. 18 papers have been presented. In the announcement of the workshop we wrote: "Fluid mechanics has actively stimulated the development of superfast vector computers like the CRAY's or CYBER 205. Now these computers on their turn stimulate the development of new algorithms which result in a high degree of vectorization (scalar/vectorized execution time). But w...

  9. A computationally efficient method for hand-eye calibration.

    Science.gov (United States)

    Zhang, Zhiqiang; Zhang, Lin; Yang, Guang-Zhong

    2017-07-19

    Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly. In order to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. We present a computationally efficient iterative method for hand-eye calibration. In this method, dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the estimation of rotation and translation of the transformation. The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results have shown significant convergence speed improvement to 3 iterations from larger than 30 with regard to standard optimization method, which illustrates the effectiveness and efficiency of the proposed method.

  10. A mixed transform approach for efficient compression of medical images.

    Science.gov (United States)

    Ramaswamy, A; Mikhael, W B

    1996-01-01

    A novel technique is presented to compress medical data employing two or more mutually nonorthogonal transforms. Both lossy and lossless compression implementations are considered. The signal is first resolved into subsignals such that each subsignal is compactly represented in a particular transform domain. An efficient lossy representation of the signal is achieved by superimposing the dominant coefficients corresponding to each subsignal. The residual error, which is the difference between the original signal and the reconstructed signal, is properly formulated. Adaptive algorithms in conjunction with an optimization strategy are developed to minimize this error. Both two-dimensional (2-D) and three-dimensional (3-D) approaches for the technique are developed. It is shown that for a given number of retained coefficients, the discrete cosine transform (DCT)-Walsh mixed transform representation yields a more compact representation than using DCT or Walsh alone. This lossy technique is further extended for the lossless case. The coefficients are quantized and the signal is reconstructed. The resulting reconstructed signal samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted employing a lossless technique such as Huffman coding. It is shown that for a given number of retained coefficients, the mixed transforms again produce the smaller rms-modified residual error. The first-order entropy of the error is also smaller for the mixed-transforms technique than for the DCT, thus resulting in smaller length Huffman codes.
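
    The two-stage idea can be sketched directly: keep the k dominant DCT coefficients, then spend another k Walsh-Hadamard coefficients on the residual. The greedy single pass below omits the paper's adaptive optimization of the two coefficient sets:

        import numpy as np
        from scipy.fft import dct, idct
        from scipy.linalg import hadamard

        def keep_top(c, k):
            out = np.zeros_like(c)
            idx = np.argsort(np.abs(c))[-k:]   # indices of dominant coefficients
            out[idx] = c[idx]
            return out

        n, k = 256, 20
        x = np.cumsum(np.random.default_rng(2).standard_normal(n))  # test signal
        H = hadamard(n) / np.sqrt(n)           # orthonormal Walsh-Hadamard basis

        c_dct = keep_top(dct(x, norm="ortho"), k)
        stage1 = idct(c_dct, norm="ortho")
        residual = x - stage1

        c_wht = keep_top(H @ residual, k)
        stage2 = H.T @ c_wht                   # H is orthogonal, so H^-1 = H.T

        err_mixed = np.linalg.norm(x - stage1 - stage2)
        err_dct = np.linalg.norm(
            x - idct(keep_top(dct(x, norm="ortho"), 2 * k), norm="ortho"))
        print(err_mixed, err_dct)   # same budget of 2k coefficients in each case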

  11. Efficient implementation for spherical flux computation and its application to vascular segmentation.

    Science.gov (United States)

    Law, Max W K; Chung, Albert C S

    2009-03-01

    Spherical flux is the flux inside a spherical region, and it is very useful in the analysis of tubular structures in magnetic resonance angiography and computed tomographic angiography. The conventional approach is to estimate the spherical flux in the spatial domain. Its running time depends on the sphere radius quadratically, which leads to very slow spherical flux computation when the sphere size is large. This paper proposes a more efficient implementation for spherical flux computation in the Fourier domain. Our implementation is based on the reformulation of the spherical flux calculation using the divergence theorem, spherical step function, and the convolution operation. With this reformulation, most of the calculations are performed in the Fourier domain. We show how to select the frequency subband so that the computation accuracy can be maintained. It is experimentally demonstrated that, using the synthetic and clinical phase contrast magnetic resonance angiographic volumes, our implementation is more computationally efficient than the conventional spatial implementation. The accuracies of our implementation and that of the conventional spatial implementation are comparable. Finally, the proposed implementation can definitely benefit the computation of the multiscale spherical flux with a set of radii because, unlike the conventional spatial implementation, the time complexity of the proposed implementation does not depend on the sphere radius.
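
    A sketch of the reformulation: by the divergence theorem the flux through a sphere equals the integral of div V over the enclosed ball, i.e. the convolution of div V with a ball indicator ("spherical step function"), which FFTs evaluate at every voxel at a cost independent of the radius. The grid size and radius below are illustrative:

        import numpy as np

        n, r, h = 64, 6, 1.0                    # grid size, sphere radius, spacing
        ax = np.arange(n)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

        # Example field V = (x, y, z) - c, whose divergence is 3 everywhere.
        c = n / 2.0
        Vx, Vy, Vz = X - c, Y - c, Z - c
        div = (np.gradient(Vx, h, axis=0) + np.gradient(Vy, h, axis=1)
               + np.gradient(Vz, h, axis=2))

        # Ball indicator kernel, centred for circular convolution.
        m = np.minimum(ax, n - ax)
        d2 = m[:, None, None] ** 2 + m[None, :, None] ** 2 + m[None, None, :] ** 2
        ball = (d2 <= r * r).astype(float) * h ** 3

        # One FFT convolution gives the spherical flux at every voxel at once.
        flux = np.real(np.fft.ifftn(np.fft.fftn(div) * np.fft.fftn(ball)))
        # Analytic check at an interior voxel: 3 * (4/3) * pi * r^3 = 4 * pi * r^3
        print(flux[n // 2, n // 2, n // 2], 4 * np.pi * r ** 3)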

  12. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  13. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    Full Text Available Introduction: Manipulation of protein stability is important for understanding the principles that govern protein thermostability, both in basic research and industrial applications. Various data mining techniques exist for the prediction of thermostable proteins. Furthermore, ANN methods have attracted significant attention for thermostability prediction, because they constitute an appropriate approach to mapping non-linear input-output relationships and offer massively parallel computing. Method: An Extreme Learning Machine (ELM) was applied to estimate the thermal behavior of 1289 proteins. In the proposed algorithm, the parameters of ELM were optimized using a Genetic Algorithm (GA), which tuned a set of input variables, hidden layer biases, and input weights, to enhance the prediction performance. The method was executed on a set of amino acids, yielding a total of 613 protein features. A number of feature selection algorithms were used to build subsets of the features. A total of 1289 protein samples and 613 protein features were calculated from the UniProt database to understand the features contributing to the enzymes' thermostability and to find the main features that influence this valuable characteristic. Results: At the primary structure level, Gln, Glu and polar were the features that contributed most to protein thermostability. At the secondary structure level, Helix_S, Coil, and charged_Coil were the most important features affecting protein thermostability. These results suggest that the thermostability of proteins is mainly associated with primary structural features of the protein. According to the results, the influence of the primary structure on the thermostability of a protein was more important than that of the secondary structure. It is shown that the prediction accuracy of ELM (mean square error) can improve dramatically using GA, with error rates RMSE=0.004 and MAPE=0.1003. Conclusion: The proposed approach for forecasting problem
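
    The ELM core is small enough to sketch: a random, untrained hidden layer followed by a linear least-squares solve for the output weights. The GA tuning of input weights, biases, and feature subsets described above is replaced here by plain random draws (an illustrative stand-in, not the GA-tuned model):

        import numpy as np

        rng = np.random.default_rng(3)

        def elm_fit(X, y, n_hidden=64):
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)                        # random feature map
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only solve is linear
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        X = rng.standard_normal((500, 10))                # e.g., 10 protein features
        y = (X[:, 0] - 0.5 * X[:, 1] ** 2 > 0).astype(float)  # toy binary label
        W, b, beta = elm_fit(X, y)
        pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
        print(float((pred == y).mean()))                  # training accuracy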

  14. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    Science.gov (United States)

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  15. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    Science.gov (United States)

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  16. Human Computer Interaction: An intellectual approach

    Directory of Open Access Journals (Sweden)

    Kuntal Saroha

    2011-08-01

    Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  17. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  18. Computational dynamics for robotics systems using a non-strict computational approach

    Science.gov (United States)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  19. NaI(Tl) Detector Efficiency Computation Using Radioactive Parallelepiped Sources Based on Efficiency Transfer Principle

    Directory of Open Access Journals (Sweden)

    Mohamed S. Badawi

    2015-01-01

    Full Text Available The efficiency transfer (ET) principle is considered as a simple numerical simulation method, which can be used to calculate the full-energy peak efficiency (FEPE) of a 3″×3″ NaI(Tl) scintillation detector over a wide energy range. In this work, the calculations of FEPE are based on computing the effective solid angle ratio between a radioactive point and parallelepiped sources located at various distances from the detector surface. Besides, the attenuation of the photon by the source-to-detector system (detector material, detector end cap, and holder material) was considered and determined. This method is straightforwardly useful in setting up the efficiency calibration curve for the NaI(Tl) scintillation detector, when no calibration sources exist in volume shape. The values of the efficiency calculations using the theoretical method are compared with the measured ones and the results show that the discrepancies in general for all the measurements are found to be less than 6%.
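
    In efficiency-transfer form, the relation described above can be written as follows (with the attenuation factors absorbed into the effective solid angles; the notation is assumed for illustration, not taken from the paper):

        \varepsilon_{\mathrm{FEPE}}^{\mathrm{par}}(E)
          = \varepsilon_{\mathrm{FEPE}}^{\mathrm{pt}}(E)\,
            \frac{\Omega_{\mathrm{eff}}^{\mathrm{par}}}{\Omega_{\mathrm{eff}}^{\mathrm{pt}}}

    Here the point-source efficiency at energy E is measured with standard calibration sources, and the two effective solid angles are computed numerically for the parallelepiped and point geometries.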

  20. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
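
    The scenario kernel being optimized is the computation of each contribution B_i^T A_i^{-1} B_i to the Schur matrix: factorize the sparse block A_i once, then reuse the factorization across all right-hand sides. A sketch with SciPy's exact sparse LU standing in for the paper's incomplete augmented PARDISO factorization plus BiCGStab:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n, m = 200, 5                        # scenario size, first-stage variables

        # Toy, well-conditioned scenario block and coupling matrix.
        A = sp.random(n, n, density=0.05, random_state=4) + sp.eye(n) * n
        B = sp.random(n, m, density=0.2, random_state=5)
        C = np.eye(m)

        lu = splu(A.tocsc())                 # one factorization per scenario
        X = lu.solve(B.toarray())            # all right-hand sides solved together
        S_contrib = C - B.T @ X              # this scenario's Schur contribution
        print(S_contrib.shape)               # (5, 5): dense and small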

  1. General and efficient parallel approach of finite element-boundary integral-multilevel fast multipole algorithm

    Institute of Scientific and Technical Information of China (English)

    Pan Xiaomin; Sheng Xinqing

    2008-01-01

    A general and efficient parallel approach is proposed for the first time to parallelize the hybrid finite-element-boundary-integral-multi-level fast multipole algorithm (FE-BI-MLFMA). Among many algorithms of FE-BI-MLFMA, the decomposition algorithm (DA) is chosen as a basis for the parallelization of FE-BI-MLFMA because of its distinct numerical characteristics suitable for parallelization. On the basis of the DA, the parallelization of FE-BI-MLFMA is carried out by employing the parallelized multi-frontal method for the matrix from the finite-element method and the parallelized MLFMA for the matrix from the boundary integral method respectively. The programming and numerical experiments of the proposed parallel approach are carried out in the high-performance computing platform CEMS-Liuhui. Numerical experiments demonstrate that FE-BI-MLFMA is efficiently parallelized and its computational capacity is greatly improved without losing accuracy, efficiency, and generality.

  2. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    Science.gov (United States)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.

  3. Computational approaches for microalgal biofuel optimization: a review.

    Science.gov (United States)

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through the use of available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  4. Computational Approaches for Microalgal Biofuel Optimization: A Review

    Directory of Open Access Journals (Sweden)

    Joseph Koussa

    2014-01-01

    Full Text Available The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through the use of available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  5. Computational Recipe for Efficient Description of Large-Scale Conformational Changes in Biomolecular Systems.

    Science.gov (United States)

    Moradi, Mahmoud; Tajkhorshid, Emad

    2014-07-01

    Characterizing large-scale structural transitions in biomolecular systems poses major technical challenges to both experimental and computational approaches. On the computational side, efficient sampling of the configuration space along the transition pathway remains the most daunting challenge. Recognizing this issue, we introduce a knowledge-based computational approach toward describing large-scale conformational transitions using (i) nonequilibrium, driven simulations combined with work measurements and (ii) free energy calculations using empirically optimized biasing protocols. The first part is based on designing mechanistically relevant, system-specific reaction coordinates whose usefulness and applicability in inducing the transition of interest are examined using knowledge-based, qualitative assessments along with nonequilibrium work measurements which provide an empirical framework for optimizing the biasing protocol. The second part employs the optimized biasing protocol resulting from the first part to initiate free energy calculations and characterize the transition quantitatively. Using a biasing protocol fine-tuned to a particular transition not only improves the accuracy of the resulting free energies but also speeds up the convergence. The efficiency of the sampling will be assessed by employing dimensionality reduction techniques to help detect possible flaws and provide potential improvements in the design of the biasing protocol. Structural transition of a membrane transporter will be used as an example to illustrate the workings of the proposed approach.

  6. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of these areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  7. Human brain mapping: Experimental and computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  9. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.

  10. Heterogeneous Computing in Economics: A Simplified Approach

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.; Grassi, Stefano

    This paper shows the potential of heterogeneous computing in solving dynamic equilibrium models in economics. We illustrate the power and simplicity of the C++ Accelerated Massive Parallelism recently introduced by Microsoft. Starting from the same exercise as Aldrich et al. (2011) we document a ...

  11. Molecular electromagnetism a computational chemistry approach

    CERN Document Server

    Sauer, Stephan P A

    2011-01-01

    A textbook for a one-semester course for students in chemistry, physics, and nanotechnology, this book examines the interaction of molecules with electric and magnetic fields, as occurs, for example, in light. The book provides the necessary background knowledge for simulating these interactions on computers with modern quantum chemical software.

  12. Computationally Efficient Characterization of Potential Energy Surfaces Based on Fingerprint Distances

    CERN Document Server

    Schaefer, Bastian

    2016-01-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states makes it possible to understand important characteristics such as thermodynamic, dynamic and structural properties. Unfortunately, computing the transition states and reaction pathways, in addition to the significant energetically low-lying local minima, is a computationally demanding task. We here introduce a computationally efficient method based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with the structural distance between the educt and product states. This method replaces the exact connectivity information and transition state energies with alternative, approximate concepts. Without adding any significant cost to the minima hopping global optimization approach, this method generates an approximate network of the minima, their connectivit...

  13. Efficient quantum computation in a network with probabilistic gates and logical encoding

    DEFF Research Database (Denmark)

    Borregaard, J.; Sørensen, A. S.; Cirac, J. I.

    2017-01-01

    An approach to efficient quantum computation with probabilistic gates is proposed and analyzed in both a local and nonlocal setting. It combines heralded gates previously studied for atom or atomlike qubits with logical encoding from linear optical quantum computation in order to perform high-fidelity quantum gates across a quantum network. The error-detecting properties of the heralded operations ensure high fidelity while the encoding makes it possible to correct for failed attempts such that deterministic and high-quality gates can be achieved. Importantly, this is robust to photon loss, which is typically the main obstacle to photonic-based quantum information processing. Overall this approach opens a path toward quantum networks with atomic nodes and photonic links.

  14. Unified commutation-pruning technique for efficient computation of composite DFTs

    Science.gov (United States)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality; in this sense DFTCOMM outperforms the competing pruning techniques reported in the literature. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
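
    As a generic illustration of output pruning (a textbook single-bin technique, not the DFTCOMM algorithm itself), the Goertzel recursion below computes an individual DFT bin with O(N) operations, which wins over a full FFT whenever only a few bins are needed:

    import numpy as np

    def goertzel(x, k):
        """DFT bin X[k] of a length-N sequence x via the Goertzel recursion."""
        N = len(x)
        w = 2 * np.pi * k / N
        c = 2 * np.cos(w)
        s_prev = s_prev2 = 0.0
        for sample in x:
            s_prev2, s_prev = s_prev, sample + c * s_prev - s_prev2
        # one final zero-input step so the phase comes out as exactly X[k]
        s_prev2, s_prev = s_prev, c * s_prev - s_prev2
        return s_prev - np.exp(-1j * w) * s_prev2

    x = np.random.default_rng(1).normal(size=1024)
    for k in (3, 97):
        assert np.allclose(goertzel(x, k), np.fft.fft(x)[k])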

  15. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  16. Efficient Variational Approaches for Deformable Registration of Images

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Akinlar

    2012-01-01

    Full Text Available Dirichlet, anisotropic, and Huber regularization terms are presented for efficient registration of deformable images. Image registration, an ill-posed optimization problem, is solved using a gradient-descent-based method and some fundamental theorems in calculus of variations. Euler-Lagrange equations with homogeneous Neumann boundary conditions are obtained. These equations are discretized by multigrid and finite difference numerical techniques. The method is applied to the registration of brain MR images of size 65×65. Computational results indicate that the presented method is quite fast and efficient in the registration of deformable medical images.
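
    A minimal numerical sketch of this kind of scheme (a toy Dirichlet-regularized SSD registration on synthetic 65×65 images, not the authors' multigrid solver) can be written as plain gradient descent on the Euler-Lagrange equations:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    F = ndimage.gaussian_filter(rng.normal(size=(65, 65)), 3)
    F /= F.std()                                   # fixed image, unit contrast
    M = np.roll(F, (2, -1), axis=(0, 1))           # moving image: known shift

    u = np.zeros((2, 65, 65))                      # displacement field
    alpha, step = 0.5, 0.2                         # regularization, step size
    yy, xx = np.mgrid[0:65, 0:65].astype(float)
    gy_img, gx_img = np.gradient(M)                # image gradients of M

    for _ in range(200):
        coords = np.array([yy + u[0], xx + u[1]])
        Mw = ndimage.map_coordinates(M, coords, order=1, mode="nearest")
        diff = Mw - F                              # data term M(x+u) - F
        gy = ndimage.map_coordinates(gy_img, coords, order=1, mode="nearest")
        gx = ndimage.map_coordinates(gx_img, coords, order=1, mode="nearest")
        force = np.array([diff * gy, diff * gx])   # image-driven force
        lap = np.array([ndimage.laplace(u[0], mode="nearest"),
                        ndimage.laplace(u[1], mode="nearest")])
        u -= step * (force - alpha * lap)          # descent on Euler-Lagrange

    print("mean |M(x+u) - F| after registration:", np.abs(Mw - F).mean())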

  17. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  18. A Unified Approach for Developing Efficient Algorithmic Programs

    Institute of Scientific and Technical Information of China (English)

    薛锦云

    1997-01-01

    A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by recurrence and initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which is easier to derive and prove formally. After obtaining an efficient and correct algorithm, a trivial transformation is used to get a final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration. The techniques of partition and recurrence are not new: partition is a general approach for dealing with complicated objects and is typically used in the divide-and-conquer approach, while recurrence is used in algorithm analysis, in developing loop invariants and in dynamic programming. The main contribution is combining these two techniques, used in typical algorithm development, into a unified and systematic approach for developing general efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.
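
    A toy instance in this spirit (my own illustration, not one of the paper's case studies): partitioning the segments of a list by their right endpoint yields the recurrence e(i) = max(e(i-1) + x_i, x_i) for the best segment ending at i, and folding it gives a linear-time maximum-segment-sum program:

    # Partition-and-recur style derivation of maximum segment sum:
    # the recurrence is derived first, then transcribed into a trivial loop.
    def max_segment_sum(xs):
        best = ending_here = xs[0]
        for x in xs[1:]:
            ending_here = max(ending_here + x, x)   # recurrence e(i)
            best = max(best, ending_here)           # fold over the partition
        return best

    assert max_segment_sum([3, -4, 5, -1, 2, -6, 4]) == 6   # segment [5, -1, 2]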

  19. Computational Approach To Understanding Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Włodzisław Duch

    2012-01-01

    Full Text Available Every year the prevalence of Autism Spectrum Disorders (ASD) is rising. Is there a unifying mechanism of various ASD cases at the genetic, molecular, cellular or systems level? The hypothesis advanced in this paper is focused on neural dysfunctions that lead to problems with attention in autistic people. Simulations of attractor neural networks performing cognitive functions help to assess system long-term neurodynamics. The Fuzzy Symbolic Dynamics (FSD) technique is used for the visualization of attractors in the semantic layer of the neural model of reading. Large-scale simulations of brain structures characterized by a high order of complexity require enormous computational power, especially if biologically motivated neuron models are used to investigate the influence of cellular structure dysfunctions on the network dynamics. Such simulations have to be implemented on computer clusters in grid-based architectures.

  20. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  1. Analog models of computations \\& Effective Church Turing Thesis: Efficient simulation of Turing machines by the General Purpose Analog Computer

    CERN Document Server

    Pouly, Amaury; Graça, Daniel S

    2012-01-01

    Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model for Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that the realistic models of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely we show that, modulo...

  2. An Automatic Approach to Detect Software Anomalies in Cloud Computing Using Pragmatic Bayes Approach

    Directory of Open Access Journals (Sweden)

    Nethaji V

    2014-06-01

    Full Text Available Software detection of anomalies is a vital element of operations in data centers and service clouds. Statistical Process Control (SPC) cloud charts sense routine anomalies, and their root causes are identified based on a differential profiling strategy. By automating these tasks, most of the manual overhead incurred in detecting software anomalies, as well as the analysis time, is reduced to a large extent, but a detailed analysis of profiling data is not performed in most cases. On the other hand, the cloud scheduler weighs both the requirements of the user and the available infrastructure to match them. An OpenStack prototype works on cloud trust management, which provides the scheduler, but complexity occurs when hosting the cloud system. At the same time, the Trusted Computing Base (TCB) of a computing node does not achieve the scalability measure. This unique paradigm brings about many software anomalies, which have not been well studied. In this work, a Pragmatic Bayes (PB) approach studies the problem of detecting software anomalies and ensures scalability by comparing information at the current time to historical data. In particular, the PB approach uses a two-component Gaussian mixture to model deviations at the current time in the cloud environment. The introduction of the Gaussian mixture in the PB approach achieves a higher scalability measure, which involves supervising a massive number of cells and is fast enough to be potentially useful in many streaming scenarios. Whereas previous work on scheduling often lacks scalability, this paper shows the superiority of the method using a Bayes per-section error rate procedure through simulation, and provides a detailed analysis of the profiling data in the marginal distributions using the Amazon EC2 dataset. Extensive performance analysis shows that the PB approach is highly efficient in terms of runtime, scalability, software anomaly detection ratio, CPU utilization, density rate, and computational

  3. Computationally efficient finite element evaluation of natural patellofemoral mechanics.

    Science.gov (United States)

    Fitzpatrick, Clare K; Baldwin, Mark A; Rullkoetter, Paul J

    2010-12-01

    pressures averaged 8.3%, 11.2%, and 5.7% between rigid and deformable analyses in the tibiofemoral joint. As statistical, probabilistic, and optimization techniques can require hundreds to thousands of analyses, a viable platform is crucial to component evaluation or clinical applications. The computationally efficient rigid body platform described in this study may be integrated with statistical and probabilistic methods and has potential clinical application in understanding in vivo joint mechanics on a subject-specific or population basis.

  4. Co-evolutionary algorithm: An efficient approach for bilevel programming problems

    Science.gov (United States)

    Li, Hecheng; Fang, Lei

    2014-03-01

    The bilevel programming problem involves two nested optimization problems; it is hierarchical, strongly NP-hard, and very challenging for most existing optimization approaches. An efficient universal co-evolutionary algorithm is developed in this article to deal with various bilevel programming problems. In the proposed algorithm, evolutionary algorithms are used to explore the leader's and the follower's decision-making spaces interactively. Unlike other existing approaches, in the suggested procedure the follower's problem is solved in two phases. First, an evolutionary algorithm is run for a few generations to obtain an approximation of lower-level solutions. In the second phase, from all the approximate solutions obtained above, only a small number of good points are selected and evolved again by a newly designed multi-criteria evolutionary algorithm. The technique refines some candidate solutions and can efficiently reduce the computational cost of obtaining feasible solutions. Proof-of-principle experiments demonstrate the efficiency of the proposed approach.
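
    The nested structure can be sketched on a deliberately small toy problem (illustrative only; the paper's algorithm co-evolves both levels and refines the follower's population in two phases): the leader minimizes F(x, y*(x)) while the follower's response y*(x) is obtained by an inner evolutionary search:

    import numpy as np

    rng = np.random.default_rng(0)

    def F(x, y):            # upper-level (leader) objective
        return (x - 1.0) ** 2 + (y - 2.0) ** 2

    def f(x, y):            # lower-level (follower) objective, optimum at y = x
        return (y - x) ** 2

    def follower_response(x, gens=30, pop=20):
        ys = rng.uniform(-5, 5, pop)
        for _ in range(gens):
            children = ys + rng.normal(0, 0.3, pop)          # Gaussian mutation
            both = np.concatenate([ys, children])
            ys = both[np.argsort(f(x, both))][:pop]          # truncation selection
        return ys[0]

    xs = rng.uniform(-5, 5, 20)
    for _ in range(40):
        children = xs + rng.normal(0, 0.3, xs.size)
        both = np.concatenate([xs, children])
        scores = np.array([F(x, follower_response(x)) for x in both])
        xs = both[np.argsort(scores)][:xs.size]

    x_best = xs[0]
    print("leader x (exact 1.5):", round(x_best, 2),
          " follower y (exact 1.5):", round(follower_response(x_best), 2))

    Here y*(x) = x exactly, so the leader's reduced objective (x-1)^2 + (x-2)^2 has its optimum at x = 1.5, which the nested search should approach.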

  5. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  6. Measuring energy efficiency in economics: Shadow value approach

    Science.gov (United States)

    Khademvatani, Asgar

    For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure, the energy shadow value (SV), drawn from economic theory, for modeling energy efficiency. This thesis demonstrates the advantages of energy SV over the average measure, conceptually and empirically: it recognizes marginal technical energy efficiency and unveils allocative energy efficiency (the ratio of energy SV to energy price). Using a dual profit function, the study illustrates how treating energy as a quasi-fixed factor (the quasi-fixed approach) offers modeling advantages and is appropriate for developing an explicit model of energy efficiency. We address fallacies and misleading results arising from the average measure and demonstrate the advantage of energy SV in inter- and intra-country energy efficiency comparisons. Energy efficiency dynamics and the determination of efficient allocation of energy use are shown through factors impacting energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using a KLEM dataset for 35 US sectors stretching from 1958 to 2000 and selected a sample of four sectors. Following the empirical results, predicted wedges between energy price and SV growth indicate a misallocation of energy use in the stone, clay and glass (SCG) and communications (Com) sectors, with more evidence in the SCG than in the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing energy SV reveals that energy, capital and technology played key roles in energy SV increases, helping to consider and analyze the policy implications of energy efficiency improvement. Applying the marginal measure, we also

  7. Efficient wave-function matching approach for quantum transport calculations

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, Dan Erik;

    2009-01-01

    The wave-function matching (WFM) technique has recently been developed for the calculation of electronic transport in quantum two-probe systems. In terms of efficiency it is comparable to the widely used Green's function approach. The WFM formalism presented so far requires the evaluation of all ...

  8. Productivity growth and efficiency measurement : a dual approach

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    2000-01-01

    This paper derives output technical efficiency from a dual system of input demand and output supply equations using the concept of virtual prices. The underlying production function is firm-specific through intercept terms and slope parameters. The approach is used to decompose total factor productivity.

  10. Discovering the Network Topology: An Efficient Approach for SDN

    Directory of Open Access Journals (Sweden)

    Leonardo OCHOA-ADAY

    2016-11-01

    Full Text Available Network topology is a physical description of the overall resources in the network. Collecting this information using efficient mechanisms becomes a critical task for important network functions such as routing, network management and quality of service (QoS), among many others. Recent technologies like Software-Defined Networks (SDN) have emerged as promising approaches for managing the next generation of networks. In order to ensure a proficient topology discovery service in SDN, we propose a simple agent-based mechanism. This mechanism improves the overall efficiency of the topology discovery process. In this paper, an algorithm for a novel Topology Discovery Protocol (SD-TDP) is described. This protocol will be implemented in each switch through a software agent. Thus, this approach provides a distributed solution to the problem of network topology discovery in a simpler and more efficient way.

  11. Efficient Parallel Computation of Nearest Neighbor Interchange Distances

    CERN Document Server

    Gast, Mikael

    2012-01-01

    The nni-distance is a well-known distance measure for phylogenetic trees. We construct an efficient parallel approximation algorithm for the nni-distance in the CRCW-PRAM model running in O(log n) time on O(n) processors. Given two phylogenetic trees T1 and T2 on the same set of taxa and with the same multi-set of edge-weights, the algorithm constructs a sequence of nni-operations of weight at most O(log n) · opt, where opt denotes the minimum weight of a sequence of nni-operations transforming T1 into T2. This algorithm is based on the sequential approximation algorithm for the nni-distance given by DasGupta et al. (2000). Furthermore, we show that the problem of identifying so-called good edge-pairs between two weighted phylogenies can be computed in O(log n) time on O(n log n) processors.

  12. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  13. Acoustic gravity waves: A computational approach

    Science.gov (United States)

    Hariharan, S. I.; Dutt, P. K.

    1987-01-01

    This paper discusses numerical solutions of a hyperbolic initial boundary value problem that arises from acoustic wave propagation in the atmosphere. Field equations are derived from the atmospheric fluid flow governed by the Euler equations. The resulting original problem is nonlinear. A first-order linearized version of the problem is used for computational purposes. The main difficulty in the problem, as with any open boundary problem, is in obtaining stable boundary conditions. Approximate boundary conditions are derived and shown to be stable. Numerical results are presented to verify the effectiveness of these boundary conditions.

  14. Energy Efficient Security Preserving VM Live Migration In Data Centers For Cloud Computing

    Directory of Open Access Journals (Sweden)

    Korir Sammy

    2012-03-01

    Full Text Available Virtualization is an innovation that has widely been utilized in modern data centers for cloud computing to realize energy-efficient operation of servers. Virtual machine (VM) migration brings multiple benefits such as resource distribution and energy-aware consolidation. Server consolidation achieves energy efficiency by enabling multiple instances of operating systems to run simultaneously on a single machine. With virtualization, it is possible to consolidate servers through VM live migration. However, migration of virtual machines brings extra energy consumption and serious security concerns that derail full adoption of this technology. In this paper, we propose secure energy-aware provisioning of cloud computing resources on consolidated and virtualized platforms. Energy efficiency is achieved through a just-right dynamic round-robin provisioning mechanism and the ability to power down sub-systems of a host system that are not required by the VMs mapped to it. We further propose solutions to security challenges faced during VM live migration. We validate our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The experimental results show that our approach achieves reduced energy consumption in data centers while not compromising on security.
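
    A minimal sketch of round-robin placement (an assumed toy data model, not the paper's CloudSim configuration) shows the basic mechanism; hosts that the cycle never fills simply stay powered down:

    from itertools import cycle

    hosts = [{"id": i, "cap": 8, "used": 0, "on": False} for i in range(4)]
    vms = [2, 3, 1, 4, 2, 2, 3]                 # CPU demand of each VM request

    rr = cycle(hosts)
    for demand in vms:
        for _ in range(len(hosts)):             # try each host at most once
            h = next(rr)
            if h["cap"] - h["used"] >= demand:
                h["on"] = True                  # power on only when first used
                h["used"] += demand
                break
        else:
            raise RuntimeError("no host has enough spare capacity")

    print([(h["id"], h["used"], h["on"]) for h in hosts])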

  15. Determinants of Health Spending Efficiency: a Tobit Panel Data Approach Based on DEA Efficiency Scores

    Directory of Open Access Journals (Sweden)

    Douanla Tayo Lionel

    2015-08-01

    Full Text Available This study aims at identifying the determinants of health expenditure efficiency over the period 2005-2011 using a Tobit panel data approach based on DEA efficiency scores. The study covers 150 countries: 45 high-income, 40 upper-middle-income, 36 lower-middle-income and 29 low-income countries. The estimated results show that carbon dioxide emissions, gross domestic product per capita, improvement in corruption, the age composition of the population, population density and government effectiveness are significant determinants of health expenditure efficiency. Thus, low-income countries should promote green growth, and all income groups should intensively fight against poverty.

  16. NEW APPROACHES TO EFFICIENCY OF MASSIVE ONLINE COURSE

    Directory of Open Access Journals (Sweden)

    Liubov S. Lysitsina

    2014-09-01

    Full Text Available This paper is focused on the efficiency of e-learning in general, and of a massive online course in programming and information technology in particular. Several innovative approaches and scenarios have been proposed, developed, implemented and verified by the authors, including (1) a new approach to organizing and using automatic immediate feedback that significantly helps a learner verify developed code and increases the efficiency of learning, (2) a new approach to constructing learning interfaces, based on a “develop a code – get a result – validate a code” technique, (3) three scenarios of visualization and verification of developed code, (4) a new multi-stage approach to solving complex programming assignments, and (5) a new implementation of “perfectionism” game mechanics in a massive online course. Overall, due to the implementation of the proposed and developed approaches, the efficiency of the massive online course has been considerably increased; in particular, (1) an additional 27.9% of students were able to successfully complete the “Web design and development using HTML5 and CSS3” massive online course at ITMO University, and (2) based on feedback from 5588 students, the “perfectionism” game mechanics noticeably improves students' involvement in course activities and the retention factor.

  17. Assessing farming eco-efficiency: a Data Envelopment Analysis approach.

    Science.gov (United States)

    Picazo-Tadeo, Andrés J; Gómez-Limón, José A; Reig-Martínez, Ernest

    2011-04-01

    This paper assesses farming eco-efficiency using Data Envelopment Analysis (DEA) techniques. Eco-efficiency scores at both farm and environmental pressure-specific levels are computed for a sample of Spanish farmers operating in the rain-fed agricultural system of Campos County. The determinants of eco-efficiency are then studied using truncated regression and bootstrapping techniques. We contribute to previous literature in this field of research by including information on slacks in the assessment of the potential environmental pressure reductions in a DEA framework. Our results reveal that farmers are quite eco-inefficient, with very few differences emerging among specific environmental pressures. Moreover, eco-inefficiency is closely related to technical inefficiencies in the management of inputs. Regarding the determinants of eco-efficiency, farmers benefiting from agri-environmental programs as well as those with university education are found to be more eco-efficient. Concerning the policy implications of these results, public expenditure on agricultural extension and farmer training could be of some help in promoting integration between farming and the environment. Furthermore, Common Agricultural Policy agri-environmental programs are an effective policy to improve eco-efficiency, although some doubts arise regarding their cost-benefit balance.
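
    For concreteness, an input-oriented, constant-returns DEA program (the generic textbook formulation with made-up toy data, not the paper's exact eco-efficiency model) can be solved with one small linear program per farm:

    import numpy as np
    from scipy.optimize import linprog

    # For each unit k: minimise theta subject to
    #   sum_j lambda_j * inputs_j <= theta * inputs_k,
    #   sum_j lambda_j * outputs_j >= outputs_k,  lambda >= 0.
    X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0], [5.0, 3.0]])  # toy inputs
    Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # toy outputs

    n, m = X.shape
    s = Y.shape[1]
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]          # decision vector: [theta, lambda]
        A_ub = np.block([
            [-X[k][:, None], X.T],           # X^T lambda - theta * x_k <= 0
            [np.zeros((s, 1)), -Y.T],        # -Y^T lambda <= -y_k
        ])
        b_ub = np.r_[np.zeros(m), -Y[k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        print(f"unit {k}: efficiency = {res.x[0]:.3f}")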

  18. Global computational algebraic topology approach for diffusion

    Science.gov (United States)

    Auclair-Fortier, Marie-Flavie; Ziou, Djemel; Allili, Madjid

    2004-05-01

    One physical process involved in many computer vision problems is heat diffusion. Such partial differential equations are continuous and have to be discretized by techniques such as finite differences or finite elements, in which the continuous domain is subdivided into sub-domains carrying a single value each. The diffusion equation derives from energy conservation and is therefore valid over a whole domain. We use this global equation directly instead of discretizing the PDE obtained from it by a limit process. To encode these physical global values over pixels of different dimensions, we use a computational algebraic topology (CAT)-based image model. This model has been proposed by Ziou and Allili and used for the deformation of curves and optical flow. It represents the image support as a decomposition in terms of points, edges, surfaces, volumes, etc., so images of any dimension can be handled. After decomposing the physical principles of heat transfer into basic laws, we recall the CAT-based image model and use it to encode the basic laws. We then present experimental results for nonlinear gray-level diffusion for denoising, ensuring thin feature preservation.
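
    For reference, the local finite-difference view of the same physics looks as follows (an ordinary explicit scheme on a synthetic image, not the paper's CAT-based encoding):

    import numpy as np

    # Linear heat diffusion u_t = div(grad u) used to denoise an image.
    rng = np.random.default_rng(0)
    u = np.zeros((64, 64))
    u[16:48, 16:48] = 1.0                     # clean square
    u += 0.2 * rng.normal(size=u.shape)       # additive noise

    dt, steps = 0.2, 25                       # dt <= 0.25 for 2-D stability
    for _ in range(steps):
        # 5-point Laplacian with reflecting (Neumann) boundaries
        up    = np.pad(u, ((1, 0), (0, 0)), mode="edge")[:-1]
        down  = np.pad(u, ((0, 1), (0, 0)), mode="edge")[1:]
        left  = np.pad(u, ((0, 0), (1, 0)), mode="edge")[:, :-1]
        right = np.pad(u, ((0, 0), (0, 1)), mode="edge")[:, 1:]
        u = u + dt * (up + down + left + right - 4 * u)

    print("noise std in a flat corner:", u[:8, :8].std())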

  19. A complex network approach to cloud computing

    CERN Document Server

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the performance of processing in cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...
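
    A toy re-creation of the two indices (my own minimal setup with assumed sizes, not the authors' experiment) using networkx:

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)

    def indices(G):
        """Mean user-to-nearest-server distance and task balance (0.5 = even)."""
        s1, s2 = rng.choice(list(G), size=2, replace=False)
        d1 = nx.single_source_shortest_path_length(G, s1)
        d2 = nx.single_source_shortest_path_length(G, s2)
        cost = np.mean([min(d1[v], d2[v]) for v in G])
        balance = np.mean([1.0 if d1[v] <= d2[v] else 0.0 for v in G])
        return cost, balance

    n, k = 500, 4                                  # nodes and mean degree
    er = nx.gnm_random_graph(n, n * k // 2, seed=1)
    er = er.subgraph(max(nx.connected_components(er), key=len)).copy()
    ba = nx.barabasi_albert_graph(n, k // 2, seed=1)
    for name, G in (("ER", er), ("BA", ba)):
        cost, balance = indices(G)
        print(f"{name}: cost={cost:.2f} balance={balance:.2f}")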

  20. Computational approaches to homogeneous gold catalysis.

    Science.gov (United States)

    Faza, Olalla Nieto; López, Carlos Silva

    2015-01-01

    Homogeneous gold catalysis has been exploding for the last decade at an outstanding pace. The best described reactivity of Au(I) and Au(III) species is based on gold's properties as a soft Lewis acid, but new reactivity patterns have recently emerged which further expand the range of transformations achievable using gold catalysis, with examples of dual gold activation, hydrogenation reactions, or Au(I)/Au(III) catalytic cycles. In this scenario, to develop fully all these new possibilities, the use of computational tools to understand at an atomistic level of detail the complete role of gold as a catalyst is unavoidable. In this work we aim to provide a comprehensive review of the available benchmark works on methodological options to study homogeneous gold catalysis in the hope that this effort can help guide the choice of method in future mechanistic studies involving gold complexes. This is relevant because a representative number of current mechanistic studies still use methods which have been reported as inappropriate and dangerously inaccurate for this chemistry. Together with this, we describe a number of recent mechanistic studies where computational chemistry has provided relevant insights into non-conventional reaction paths, unexpected selectivities or novel reactivity, which illustrate the complexity behind gold-mediated organic chemistry.

  1. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    Science.gov (United States)

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  2. Efficiency of supply chain management. Strategic and operational approach

    Directory of Open Access Journals (Sweden)

    Grzegorz Lichocik

    2013-06-01

    Full Text Available Background: One of the most important issues subject to theoretical considerations and empirical studies is the measurement of the efficiency of activities in logistics and supply chain management. At the same time, efficiency is a term interpreted in an ambiguous and multi-aspect manner, depending on the subject of a study. The multitude of analytical dimensions of this term means that, apart from economic efficiency as the basic study area, other dimensions perceived as an added value by different groups of supply chain participants become more and more important. Methods: The objective of this paper is to attempt to explain the problem of supply chain management efficiency in the context of general theoretical considerations relating to supply chain management. The authors have also highlighted determinants and practical implications of supply chain management efficiency in strategic and operational contexts. The study employs critical analyses of the logistics literature and a free-form interview with top management representatives of a company operating in the TSL sector. Results: We must find a comprehensive approach to supply chain efficiency including all analytical dimensions connected with the real flow of goods and services. An effective supply chain must be cost-effective (ensuring the economic efficiency of the chain), functional (reducing processes), lean (minimising the number of links in the chain to the necessary ones, adapting supply chain participants' internal processes to a common objective based on its efficiency) and must ensure a high quality of services (customer-oriented logistics systems). Conclusions: Efficiency of supply chains is not only a task for which a logistics department is responsible; it is a strategic decision taken by the management as regards the method of the company's future operation. Correctly planned and fulfilled logistics tasks may result in improving the performance of a company as well as of the whole supply chain.

  3. Q-P Wave traveltime computation by an iterative approach

    KAUST Repository

    Ma, Xuxin

    2013-01-01

    In this work, we present a new approach to computing anisotropic traveltimes based on successively solving elliptically isotropic traveltime problems. The method shows good accuracy and is very simple to implement.

  4. Efficient coupling integrals computation of waveguide step discontinuities using BI-RME and Nystrom methods

    Science.gov (United States)

    Taroncher, Mariam; Vidal-Pantaleoni, Ana; Boria, Vicente E.; Marini, Stephan; Soto, Pablo; Cogollos, Santiago

    2004-04-01

    This paper describes a novel technique for the very efficient and accurate computation of the coupling integrals of waveguide step discontinuities between waveguides of arbitrary cross section. This new technique relies on solving the Integral Equation (IE) underlying the well-known Boundary Integral - Resonant Mode Expansion (BI-RME) method by the Nystrom approach, instead of using the traditional Galerkin version of the Method of Moments (MoM), thus providing large savings in computational cost. Comparative benchmarks between the results provided by the new technique and the original BI-RME method are successfully presented.
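
    The mechanics of a Nystrom solve are easy to show on a textbook Fredholm equation (a generic illustration, not the BI-RME kernels themselves): quadrature turns the integral equation into a dense linear system collocated at the quadrature nodes:

    import numpy as np

    # Solve phi(x) - \int_0^1 x*t * phi(t) dt = (2/3) x on [0, 1];
    # the exact solution is phi(x) = x.
    n = 16
    x, w = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
    x = 0.5 * (x + 1.0)                           # map nodes to [0, 1]
    w = 0.5 * w                                   # rescale weights

    K = np.outer(x, x)                            # kernel K(x_i, t_j) = x_i * t_j
    A = np.eye(n) - K * w                         # (I - K W) phi = f
    phi = np.linalg.solve(A, (2.0 / 3.0) * x)

    print("max error vs exact phi(x)=x:", np.abs(phi - x).max())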

  5. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    Science.gov (United States)

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
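
    The enabling identity is completing the square in the Fourier kernel; in schematic notation (a hedged summary in my own symbols, with \beta the chirp rate of the quadratic phase and a(t) the nearly transform-limited envelope):

    \[
      F(\omega) = \int a(t)\, e^{i\beta t^{2}}\, e^{-i\omega t}\, dt
                = e^{-i\omega^{2}/(4\beta)} \int a(t)\, e^{i\beta\,(t-\omega/(2\beta))^{2}}\, dt,
    \]

    so the transform of the chirped pulse reduces to a chirp-weighted convolution of the compact envelope a(t); both factors can then be sampled on grids sized for a(t) rather than for the fully stretched pulse, which is the sense in which the required grid shrinks roughly by the stretching factor.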

  6. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to a novel and important CI technology: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNN, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty, a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and the results of their application for bankruptcy ris...

  7. A polyhedral approach to computing border bases

    CERN Document Server

    Braun, Gábor

    2009-01-01

    Border bases can be considered to be the natural extension of Gröbner bases that have several advantages. Unfortunately, to date the classical border basis algorithm relies on (degree-compatible) term orderings and implicitly on reduced Gröbner bases. We adapt the classical border basis algorithm to allow for calculating border bases for arbitrary degree-compatible order ideals, which is independent from term orderings. Moreover, the algorithm also supports calculating degree-compatible order ideals with preference on contained elements, even though finding a preferred order ideal is NP-hard. Effectively we retain degree-compatibility only to successively extend our computation degree-by-degree. The adaptation is based on our polyhedral characterization: order ideals that support a border basis correspond one-to-one to integral points of the order ideal polytope. This establishes a crucial connection between the ideal and the combinatorial structure of the associated factor spaces.

  8. Biologically motivated computationally intensive approaches to image pattern recognition

    NARCIS (Netherlands)

    Petkov, Nikolay

    1995-01-01

    This paper presents some of the research activities of the research group in vision as a grand challenge problem whose solution is estimated to need the power of Tflop/s computers and for which computational methods have yet to be developed. The concerned approaches are biologically motivated, in th

  9. An Approach to Dynamic Provisioning of Social and Computational Services

    NARCIS (Netherlands)

    Bonino da Silva Santos, Luiz Olavo; Sorathia, Vikram; Ferreira Pires, Luis; Sinderen, van Marten

    2010-01-01

    Service-Oriented Computing (SOC) builds upon the intuitive notion of service already known and used in our society for a long time. SOC-related approaches are based on computer-executable functional units that often represent automation of services that exist at the social level, i.e., services at t

  10. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    Full Text Available The article presents a computer algorithm for estimating and analyzing the operating costs of a rail bus. The computer application compares the cost of employing a locomotive and wagons, the cost of using locomotives, and the cost of using a rail bus. Intensive growth in passenger railway traffic has increased the demand for modern computer systems to manage means of transportation. The described computer application operates on the basis of selected operating parameters of rail buses.

  11. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    OpenAIRE

    Broadbent, Anne

    2015-01-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the pa...

  12. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    Science.gov (United States)

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  13. Efficient volume preserving approach for skeleton-based implicit surfaces

    Institute of Scientific and Technical Information of China (English)

    史红兵; 童若锋; 董金祥

    2003-01-01

    This paper presents an efficient way to preserve the volume of implicit surfaces generated by skeletons. Recursive subdivision is used to efficiently calculate the volume. The criterion for subdivision is obtained by using the property of density functions and treating each type of skeleton individually to get accurate minimum and maximum distances from a cube to a skeleton. Compared with criteria generated by other means, such as traditional interval analysis, affine arithmetic, or the Lipschitz condition, our approach is much better in both speed and accuracy.
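
    As a rough illustration of the recursive subdivision idea, the following sketch estimates the volume enclosed by an implicit surface from conservative per-cube minimum and maximum field values; the `sphere_bounds` helper and the depth are hypothetical stand-ins for the paper's skeleton-specific criteria.

```python
import math

# Generic recursive-subdivision sketch (hypothetical API, not the paper's
# code): `bounds(lo, size)` must return conservative (min, max) values of the
# implicit field over the cube [lo, lo+size]^3, with the field negative inside
# the surface and positive outside.
def volume(bounds, lo, size, max_depth):
    fmin, fmax = bounds(lo, size)
    if fmax < 0.0:                      # cube entirely inside the surface
        return size ** 3
    if fmin > 0.0:                      # cube entirely outside
        return 0.0
    if max_depth == 0:                  # ambiguous leaf: midpoint estimate
        return 0.5 * size ** 3
    half = size / 2.0
    return sum(volume(bounds, (lo[0] + dx, lo[1] + dy, lo[2] + dz),
                      half, max_depth - 1)
               for dx in (0.0, half) for dy in (0.0, half) for dz in (0.0, half))

def sphere_bounds(lo, size):
    # conservative (min, max) of f(p) = |p| - 1 over the cube
    near2 = far2 = 0.0
    for c in lo:
        near = min(max(c, 0.0), c + size)   # coordinate closest to 0 in [c, c+size]
        far = c if abs(c) > abs(c + size) else c + size
        near2 += near * near
        far2 += far * far
    return math.sqrt(near2) - 1.0, math.sqrt(far2) - 1.0

print(volume(sphere_bounds, (-1.5, -1.5, -1.5), 3.0, 5))  # ~4.19, i.e. 4*pi/3
```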

  14. Stochastic pumping of heat: approaching the Carnot efficiency.

    Science.gov (United States)

    Segal, Dvira

    2008-12-31

    Random noise can generate a unidirectional heat current across asymmetric nano-objects in the absence of (or against) a temperature gradient. We present a minimal model for a molecular-level stochastic heat pump that may operate arbitrarily close to the Carnot efficiency. The model consists of a fluctuating molecular unit coupled to two solids characterized by distinct phonon spectral properties. Heat pumping persists for a broad range of system and bath parameters. Furthermore, by filtering the reservoirs' phonons the pump efficiency can approach the Carnot limit.

  15. An Efficient Soft Set-Based Approach for Conflict Analysis.

    Science.gov (United States)

    Sutoyo, Edi; Mungad, Mungad; Hamid, Suraya; Herawan, Tutut

    2016-01-01

    Conflict analysis has been used as an important tool in economic, business, governmental and political disputes, games, management negotiations, military operations, etc. Many mathematical formal models have been proposed to handle conflict situations, and one of the most popular is rough set theory. With its ability to handle vagueness in conflict data sets, rough set theory has been used successfully. However, computational time is still an issue when determining the certainty, coverage, and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations, based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we evaluate the proposed approach on a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory.

  16. An Efficient Soft Set-Based Approach for Conflict Analysis.

    Directory of Open Access Journals (Sweden)

    Edi Sutoyo

    Full Text Available Conflict analysis has been used as an important tool in economic, business, governmental and political disputes, games, management negotiations, military operations, etc. Many mathematical formal models have been proposed to handle conflict situations, and one of the most popular is rough set theory. With its ability to handle vagueness in conflict data sets, rough set theory has been used successfully. However, computational time is still an issue when determining the certainty, coverage, and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations, based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we evaluate the proposed approach on a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory.

  17. An Efficient Soft Set-Based Approach for Conflict Analysis

    Science.gov (United States)

    Sutoyo, Edi; Mungad, Mungad; Hamid, Suraya; Herawan, Tutut

    2016-01-01

    Conflict analysis has been used as an important tool in economic, business, governmental and political disputes, games, management negotiations, military operations, etc. Many mathematical formal models have been proposed to handle conflict situations, and one of the most popular is rough set theory. With its ability to handle vagueness in conflict data sets, rough set theory has been used successfully. However, computational time is still an issue when determining the certainty, coverage, and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations, based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we evaluate the proposed approach on a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory. PMID:26928627
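
    A minimal sketch of the co-occurrence idea (not the authors' full method) follows: representing a soft set as a mapping from objects to the parameter sets they possess, co-occurrence counts are obtained in a single pass over the table, with no decision rules involved. The `votes` table is a made-up toy example.

```python
from itertools import combinations

# Toy soft set over parliament members: object -> set of parameters it holds.
votes = {
    "m1": {"supports_A", "supports_B"},
    "m2": {"supports_A"},
    "m3": {"supports_B", "supports_C"},
}

def cooccurrence(soft_set):
    """Count, for every parameter pair, how many objects hold both."""
    counts = {}
    for params in soft_set.values():
        for p, q in combinations(sorted(params), 2):
            counts[(p, q)] = counts.get((p, q), 0) + 1
    return counts

print(cooccurrence(votes))
# {('supports_A', 'supports_B'): 1, ('supports_B', 'supports_C'): 1}
```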

  18. Delay Computation Using Fuzzy Logic Approach

    Directory of Open Access Journals (Sweden)

    Ramasesh G. R.

    2012-10-01

    Full Text Available The paper presents a practical application of fuzzy sets and system theory in predicting, with reasonable accuracy, delays arising from a wide range of factors pertaining to construction projects. In this paper we use fuzzy logic to predict delays on account of delayed supplies and labor shortages. It is observed that project scheduling software uses either deterministic or probabilistic methods for the computation of schedule durations, delays, lags and other parameters. In other words, these methods use only quantitative inputs, leaving out the qualitative aspects associated with individual activities of work. Qualitative aspects, viz. the expertise of the mason or a lack of experience, can have a significant impact on the assessed duration. Such qualitative aspects do not find adequate representation in project scheduling software. A realistic project is considered, for which a PERT chart has been prepared showing all the major activities in reasonable detail. This project has been periodically updated until its completion. It is observed that some of the activities are delayed due to extraneous factors, resulting in the overall delay of the project. The software has the capability to calculate the overall delay through CPM (Critical Path Method) when each of the activity delays is reported. We demonstrate that, by using fuzzy logic, these delays could have been predicted well in advance.
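
    The following sketch shows how such a qualitative input could enter a delay estimate through fuzzy logic; the triangular membership functions, their parameters, and the single Mamdani-style rule are hypothetical choices for illustration, not the paper's calibrated system.

```python
# Minimal fuzzy-inference sketch: the membership parameters and the single
# rule below are hypothetical, not the paper's calibrated system.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def delay_estimate(supply_delay_days, labor_shortage_pct):
    # degrees to which each condition is "severe"
    supply_severe = tri(supply_delay_days, 2.0, 10.0, 18.0)
    labor_severe = tri(labor_shortage_pct, 10.0, 40.0, 70.0)
    # one Mamdani-style rule: IF supply severe AND labor severe THEN delay high
    firing = min(supply_severe, labor_severe)
    return firing * 15.0   # defuzzify against a nominal 15-day "high" delay

print(delay_estimate(8.0, 35.0))   # predicted additional delay in days
```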

  19. On a multiscale approach for filter efficiency simulations

    KAUST Repository

    Iliev, Oleg P.

    2014-07-01

    Filtration in general, and the dead-end depth filtration of solid particles out of fluid in particular, is an intrinsically multiscale problem. The deposition (capturing of particles) essentially depends on the local velocity, on the microgeometry (pore scale geometry) of the filtering medium and on the diameter distribution of the particles. The deposited (captured) particles change the microstructure of the porous medium, which leads to a change of permeability. The changed permeability directly influences the velocity field and pressure distribution inside the filter element. To close the loop, we mention that the velocity influences the transport and deposition of particles. In certain cases one can evaluate the filtration efficiency considering only microscale or only macroscale models, but in general an accurate prediction of the filtration efficiency requires multiscale models and algorithms. This paper discusses the single-scale and multiscale models, and presents a fractional time step discretization algorithm for the multiscale problem. The velocity within the filter element is computed at the macroscale, and is used as input for the solution of microscale problems at selected locations of the porous medium. The microscale problem is solved with respect to transport and capturing of individual particles, and its solution is postprocessed to provide permeability values for macroscale computations. Results from computational experiments with an oil filter are presented and discussed.
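
    The loop structure described above can be summarized in a short schematic; every callback below (`macro_solver`, `micro_solver`, `update_perm`) is a hypothetical placeholder for the corresponding solver stage, not the paper's actual API.

```python
# Schematic of the fractional-time-step coupling described above; all solver
# callbacks are hypothetical placeholders, not the paper's API.
def simulate_filter(K0, t_end, dt, macro_solver, micro_solver, update_perm):
    """K0: initial permeability field; returns the permeability history."""
    K, t, history = K0, 0.0, [K0]
    while t < t_end:
        velocity, pressure = macro_solver(K)      # macroscale flow problem
        deposits = micro_solver(velocity, dt)     # particle capture at selected
                                                  # microscale locations
        K = update_perm(K, deposits)              # postprocess captured particles
                                                  # into updated permeability
        history.append(K)
        t += dt
    return history
```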

  20. Multivariate analysis: A statistical approach for computations

    Science.gov (United States)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis and education, for evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.

  1. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    Science.gov (United States)

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether or not it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.

  2. A Dynamic BI–Orthogonal Field Equation Approach to Efficient Bayesian Inversion

    Directory of Open Access Journals (Sweden)

    Tagade Piyush M.

    2017-06-01

    Full Text Available This paper proposes a novel, computationally efficient, stochastic spectral projection based approach to Bayesian inversion of a computer simulator with high dimensional parametric and model structure uncertainty. The proposed method is based on the decomposition of the solution into its mean and a random field using a generic Karhunen-Loève expansion. The random field is represented as a convolution of separable Hilbert spaces in stochastic and spatial dimensions that are spectrally represented using respective orthogonal bases. In particular, the present paper investigates generalized polynomial chaos bases for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic orthogonality is used to derive closed-form equations for the time evolution of the mean, spatial and stochastic fields. The resultant system of equations consists of a partial differential equation (PDE) that defines the dynamic evolution of the mean, a set of PDEs to define the time evolution of the eigenfunction bases, and a set of ordinary differential equations (ODEs) that define the dynamics of the stochastic field. This system of dynamic evolution equations efficiently propagates the prior parametric uncertainty to the system response. The resulting bi-orthogonal expansion of the system response is used to reformulate the Bayesian inference for efficient exploration of the posterior distribution. The efficacy of the proposed method is investigated for calibration of a 2D transient diffusion simulator with an uncertain source location and diffusivity. The computational efficiency of the method is demonstrated against a Monte Carlo method and a generalized polynomial chaos approach.
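
    As background for the comparison baseline, a minimal sketch of standard generalized polynomial chaos projection for a scalar model with one standard normal input is given below (this is the textbook gPC technique, not the paper's dynamically bi-orthogonal solver; the toy model `u` is an assumption).

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Textbook gPC projection for u(xi) with xi ~ N(0, 1), using probabilists'
# Hermite polynomials He_k (orthogonal with squared norm sqrt(2*pi) * k!).
def gpc_coeffs(model, order, nquad=20):
    x, w = He.hermegauss(nquad)              # nodes/weights for exp(-x^2/2)
    return np.array([
        np.sum(w * model(x) * He.hermeval(x, [0.0] * k + [1.0]))
        / (sqrt(2.0 * pi) * factorial(k))
        for k in range(order + 1)
    ])

u = lambda xi: np.exp(0.3 * xi)              # toy "simulator response"
c = gpc_coeffs(u, order=6)
mean = c[0]                                  # ~exp(0.3**2 / 2), i.e. E[u]
var = sum(factorial(k) * c[k] ** 2 for k in range(1, 7))
print(mean, var)
```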

  3. Aluminium in Biological Environments: A Computational Approach

    Science.gov (United States)

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns about the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH condition of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomatic acidic. Aluminium can substitute for other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's sidechains. A detailed account of the interaction of aluminium with proteic sidechains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction, and production of hydroxyl radicals, will also be discussed. PMID:24757505

  4. The green computing book tackling energy efficiency at large scale

    CERN Document Server

    Feng, Wu-chun

    2014-01-01

    Contents include: Low-Power, Massively Parallel, Energy-Efficient Supercomputers (The Blue Gene Team); Compiler-Driven Energy Efficiency (Mahmut Kandemir and Shekhar Srikantaiah); An Adaptive Run-Time System for Improving Energy Efficiency (Chung-Hsing Hsu, Wu-chun Feng, and Stephen W. Poole); Energy-Efficient Multithreading through Run-Time Adaptation; Exploring Trade-Offs between Energy Savings and Reliability in Storage Systems (Ali R. Butt, Puranjoy Bhattacharjee, Guanying Wang, and Chris Gniady); Cross-Layer Power Management (Zhikui Wang and Parthasarathy Ranganathan); Energy-Efficient Virtualized Systems (Ripal Nathuji and K

  5. Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2013-01-01

    Full Text Available In the pose estimation problem for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, this process is usually time consuming, especially in the outer space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for pose estimation based on vision technology. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on the absolute orientation information. Experimental results show that this approach achieves comparable accuracy with the SVD-based methods; however, the computational time has been greatly reduced due to the use of the absolute orientation method.

  6. An efficient method for computing genus expansions and counting numbers in the Hermitian matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Gabriel, E-mail: galvarez@fis.ucm.e [Departamento de Fisica Teorica II, Facultad de Ciencias Fisicas, Universidad Complutense, 28040 Madrid (Spain); Martinez Alonso, Luis, E-mail: luism@fis.ucm.e [Departamento de Fisica Teorica II, Facultad de Ciencias Fisicas, Universidad Complutense, 28040 Madrid (Spain); Medina, Elena, E-mail: elena.medina@uca.e [Departamento de Matematicas, Facultad de Ciencias, Universidad de Cadiz, 11510 Puerto Real, Cadiz (Spain)

    2011-07-11

    We present a method to compute the genus expansion of the free energy of Hermitian matrix models from the large N expansion of the recurrence coefficients of the associated family of orthogonal polynomials. The method is based on the Bleher-Its deformation of the model, on its associated integral representation of the free energy, and on a method for solving the string equation which uses the resolvent of the Lax operator of the underlying Toda hierarchy. As a byproduct we obtain an efficient algorithm to compute generating functions for the enumeration of labeled k-maps which does not require the explicit expressions of the coefficients of the topological expansion. Finally we discuss the regularization of singular one-cut models within this approach.

  7. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    CERN Document Server

    Cowan, B M; Beck, A; Davoine, X; Bunkers, K; Lifschitz, A F; Lefebvre, E; Bruhwiler, D L; Shadwick, B A; Umstadter, D P

    2012-01-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100 terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, three-dimensional particle-in-cell modelling are examined. First, the Cartesian code VORPAL using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code CALDER-CIRC uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two mo...

  8. Mobile Cloud Computing: A Review on Smartphone Augmentation Approaches

    CERN Document Server

    Abolfazli, Saeid; Gani, Abdullah

    2012-01-01

    Smartphones have recently gained significant popularity in heavy mobile processing while users are increasing their expectations toward rich computing experience. However, resource limitations and current mobile computing advancements hinder this vision. Therefore, resource-intensive application execution remains a challenging task in mobile computing that necessitates device augmentation. In this article, smartphone augmentation approaches are reviewed and classified in two main groups, namely hardware and software. Generating high-end hardware is a subset of hardware augmentation approaches, whereas conserving local resources and reducing resource requirements are grouped under software augmentation methods. Our study advocates that conserving smartphones' native resources, which is mainly done via task offloading, is more appropriate for already-developed applications than new ones, due to the costly re-development process. Cloud computing has recently obtained momentous ground as one of the major co...

  9. Convergence Analysis of a Class of Computational Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Junfeng Chen

    2013-01-01

    Full Text Available Computational intelligence is a relatively new interdisciplinary field of research with many promising application areas. Although computational intelligence approaches have gained huge popularity, their convergence is difficult to analyze. In this paper, a computational model is built up for a class of computational intelligence approaches, represented by the canonical forms of genetic algorithms, ant colony optimization, and particle swarm optimization, in order to describe the common features of these algorithms. Two quantification indices, the variation rate and the progress rate, are then defined to indicate the variety and the optimality, respectively, of the solution sets generated in the search process of the model. Moreover, we give four types of probabilistic convergence for the solution set updating sequences, and their relations are discussed. Finally, sufficient conditions are derived for the almost sure weak convergence and the almost sure strong convergence of the model by introducing martingale theory into the Markov chain analysis.

  10. What is intrinsic motivation? A typology of computational approaches

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Oudeyer

    2009-11-01

    Full Text Available Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.

  11. Match and Move, an Approach to Data Parallel Computing

    Science.gov (United States)

    1992-10-01

    Blelloch, Siddhartha Chatterjee, Jay Sippelstein, and Marco Zagha. CVL: a C Vector Library. School of Computer Science, Carnegie Mellon University... [CBZ90] Siddhartha Chatterjee, Guy E. Blelloch, and Marco Zagha. Scan primitives for vector computers. In Proceedings Supercomputing '90, November 1990... [Cha91] Siddhartha Chatterjee. Compiling data-parallel programs for efficient execution on shared-memory multiprocessors. PhD thesis, Carnegie Mellon

  12. Using the Computer Efficiently and Improving Working Efficiency

    Institute of Scientific and Technical Information of China (English)

    王伟

    2012-01-01

    Drawing on practical work experience, the article presents ways and means of using the computer efficiently to improve working efficiency.

  13. Efficient quantum-classical method for computing thermal rate constant of recombination: application to ozone formation.

    Science.gov (United States)

    Ivanov, Mikhail V; Babikov, Dmitri

    2012-05-14

    An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate with the experimental result is presented.
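
    The simultaneous-sampling strategy can be sketched schematically as follows; the distributions and the stabilization probability below are toy stand-ins (in reduced units) for the actual quantum-classical collision dynamics.

```python
import numpy as np

# Schematic of the simultaneous-sampling strategy (placeholder physics: the
# stabilization probability below is a toy stand-in for the quantum-classical
# collision dynamics, and reduced units are used throughout).
rng = np.random.default_rng(0)
n_traj = 10_000
kT = 1.0

E = rng.gamma(2.0, kT, n_traj)               # flux-weighted thermal energies, ~E*exp(-E/kT)
b_max = 5.0
b = b_max * np.sqrt(rng.random(n_traj))      # impact parameter, density ~ b db
cos_theta = 1.0 - 2.0 * rng.random(n_traj)   # isotropic incident direction
j = rng.poisson(3.0, n_traj)                 # rotational state of the molecule

# Toy stabilization probability per sampled collision (the direction would
# enter the real dynamics; it is drawn here only to show the sampling step).
p_stab = np.exp(-b / 2.0) / (1.0 + E) / (1.0 + 0.1 * j)
rate = np.pi * b_max**2 * p_stab.mean()      # cross-section-weighted estimate
print(rate)
```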

  14. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    Directory of Open Access Journals (Sweden)

    Usman Khan

    2014-04-01

    Full Text Available Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters, as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.
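
    A much-simplified sketch of the Bessel-function ingredient is shown below: for an annular membrane region with uniform lateral heat loss, the circular-symmetric steady solutions are I0(r/L) and K0(r/L), and imposing the boundary temperatures reduces to a tiny linear system, echoing the matrix treatment of boundary conditions. All dimensions and temperatures are made-up values, and the full model's Joule heating, radiation and segmentation terms are omitted.

```python
import numpy as np
from scipy.special import i0, k0

# Simplified illustration (not the paper's full model): the steady temperature
# of an annular membrane a <= r <= b with uniform lateral heat loss obeys
#   T'' + T'/r - T/L^2 = 0,
# whose circular-symmetric solutions are I0(r/L) and K0(r/L). Imposing the two
# boundary temperatures gives a 2x2 linear system.
a, b, L = 0.1e-3, 1.0e-3, 0.4e-3      # heater edge, membrane edge, decay length (m)
T_heater, T_rim = 800.0, 25.0         # boundary temperatures (deg C)

A = np.array([[i0(a / L), k0(a / L)],
              [i0(b / L), k0(b / L)]])
c = np.linalg.solve(A, np.array([T_heater, T_rim]))

r = np.linspace(a, b, 5)
T = c[0] * i0(r / L) + c[1] * k0(r / L)
print(T)                              # decays from 800 toward the rim value
```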

  15. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
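
    The qualitative degree-dependence can be reproduced with a toy SIS-type simulation on a scale-free graph (an illustration of the claim, not the paper's model; the infection and cure probabilities and all sizes are arbitrary choices).

```python
import random
import networkx as nx

# Toy SIS-type epidemic on a scale-free topology, tracking how often each
# node is infected; parameters are arbitrary illustration values.
random.seed(1)
G = nx.barabasi_albert_graph(2000, 3)
infected = set(random.sample(list(G.nodes), 20))
beta, gamma = 0.05, 0.2                      # per-neighbour infection / cure prob.
time_infected = {v: 0 for v in G}

for _ in range(200):
    nxt = set()
    for v in G:
        if v in infected:
            if random.random() > gamma:      # remains infected this step
                nxt.add(v)
        else:
            k = sum(1 for u in G[v] if u in infected)
            if k and random.random() < 1.0 - (1.0 - beta) ** k:
                nxt.add(v)
    infected = nxt
    for v in infected:
        time_infected[v] += 1

hubs = [v for v in G if G.degree(v) >= 20]
leaves = [v for v in G if G.degree(v) <= 4]
print(sum(time_infected[v] for v in hubs) / (200 * len(hubs)))      # high
print(sum(time_infected[v] for v in leaves) / (200 * len(leaves)))  # low
```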

  16. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...

  17. An Efficient Security Approach Using PGE and Parity Coding

    Directory of Open Access Journals (Sweden)

    Ch. Rupa

    2012-12-01

    Full Text Available Information attacks expose the weaknesses of information security that have accompanied the rapid growth of globalisation. The main aim of these attacks is to retrieve information illegally, revealing faults in the security services. In this paper, we introduce a novel secure steganographic approach for defending against these information attacks. In this approach, instead of the original message, a message encrypted by the Prime Number and Gray Code Encryption (PGE) algorithm is hidden in an image (the stego image) using a new approach named Linear Block Parity coding (LBP), which provides more security than conventional approaches. A major strength of this paper is that steganalysis is discussed. The computational complexity is comparatively low with respect to other methods, since our feature vector space is limited and interference is not objectionable.
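
    Generic parity coding can be demonstrated in a few lines: each block of cover pixels carries one message bit in the parity of its least significant bits, and at most one LSB flip per block is needed. This sketch shows only the parity-coding layer; the PGE cipher and the paper's specific LBP scheme are not reproduced.

```python
# Generic parity-coding demonstration: one message bit per block of pixels,
# stored in the parity of the block's least significant bits.
def embed(pixels, bits, block=4):
    pixels = list(pixels)
    for i, bit in enumerate(bits):
        blk = slice(i * block, (i + 1) * block)
        parity = sum(p & 1 for p in pixels[blk]) % 2
        if parity != bit:
            pixels[i * block] ^= 1          # flip one LSB to fix the parity
    return pixels

def extract(pixels, nbits, block=4):
    return [sum(p & 1 for p in pixels[i * block:(i + 1) * block]) % 2
            for i in range(nbits)]

cover = [52, 55, 61, 59, 79, 61, 76, 61, 110, 97, 103, 112]
message = [1, 0, 1]
stego = embed(cover, message)
assert extract(stego, 3) == message
```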

  18. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia;

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach are highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.

  19. Computationally efficient multidimensional analysis of complex flow cytometry data using second order polynomial histograms.

    Science.gov (United States)

    Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge

    2016-01-01

    Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal of efficiently estimating multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also optimally uses wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing the computational load. We have validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase the efficiency of analysing complex mixtures of cells in higher dimensions.
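
    A one-dimensional sketch of the second-order-polynomial idea follows: with only a handful of wide bins, fitting a quadratic through three neighbouring bin counts locates a mode much more precisely than the bin width alone. The data mixture and bin count are arbitrary choices; the paper's method is multivariate.

```python
import numpy as np

# 1-D sketch of quadratic-histogram mode finding; the mixture, bin count and
# seed are arbitrary. A local parabola through three wide-bin counts places
# the mode well inside a bin.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2.0, 0.5, 50_000),
                       rng.normal(3.0, 0.8, 50_000)])
counts, edges = np.histogram(data, bins=8)
centers = 0.5 * (edges[:-1] + edges[1:])

modes = []
for i in range(1, len(counts) - 1):
    if counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]:
        a, b, c = np.polyfit(centers[i - 1:i + 2], counts[i - 1:i + 2], 2)
        if a < 0:
            modes.append(-b / (2.0 * a))   # vertex of the fitted parabola
print(modes)                                # near the true modes -2 and 3
```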

  20. Loss tolerant one-way quantum computation -- a horticultural approach

    CERN Document Server

    Varnava, M; Rudolph, T; Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2005-01-01

    We introduce a scheme for fault tolerantly dealing with losses in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively - no coherent measurements or coherent correction is required. We then use this procedure within a specific linear optical quantum computation proposal to show that: (i) given perfect sources, detector inefficiencies of up to 50% can be tolerated and (ii) given perfect detectors, the purity of the photon source (overlap of the photonic wavefunction with the desired single mode) need only be greater than 66.6% for efficient computation to be possible.

  1. Towards an efficient two-scale approach to model technical textiles

    Science.gov (United States)

    Fillep, Sebastian; Mergheim, Julia; Steinmann, Paul

    2017-03-01

    The paper proposes and investigates an efficient two-scale approach to describe the material behavior of technical textiles. On the macroscopic scale the considered textile materials are modeled as homogeneous by means of shell elements. The heterogeneous microstructure, which consists e.g. of woven fibers, is explicitly resolved in representative volume elements (RVE). A shell-specific homogenization scheme is applied to connect the macro and the micro scale. The simultaneous solution of the macroscopic and the nonlinear microscopic simulations, e.g. by means of the FE^2-method, is very expensive. Therefore, a different approach is applied here: the macro constitutive response is computed in advance and tabulated for a certain RVE and for different loading scenarios. These homogenized stress and tangent values are then used in a macroscopic simulation without the need to explicitly resort to the microscopic simulations. The efficiency of the approach is analyzed by means of numerical examples.

  2. Towards an efficient two-scale approach to model technical textiles

    Science.gov (United States)

    Fillep, Sebastian; Mergheim, Julia; Steinmann, Paul

    2016-11-01

    The paper proposes and investigates an efficient two-scale approach to describe the material behavior of technical textiles. On the macroscopic scale the considered textile materials are modeled as homogeneous by means of shell elements. The heterogeneous microstructure, which consists e.g. of woven fibers, is explicitly resolved in representative volume elements (RVE). A shell-specific homogenization scheme is applied to connect the macro and the micro scale. The simultaneous solution of the macroscopic and the nonlinear microscopic simulations, e.g. by means of the FE^2-method, is very expensive. Therefore, a different approach is applied here: the macro constitutive response is computed in advance and tabulated for a certain RVE and for different loading scenarios. These homogenized stress and tangent values are then used in a macroscopic simulation without the need to explicitly resort to the microscopic simulations. The efficiency of the approach is analyzed by means of numerical examples.
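
    The tabulate-then-interpolate step might look like the following sketch, where a placeholder function stands in for the offline RVE computations and the macroscale simulation only queries the table (the strain ranges and the stand-in response are assumptions, not the paper's data).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the tabulation idea: the homogenized stress is computed offline on
# a grid of macroscopic strain states and interpolated during the macroscale
# simulation instead of solving the RVE on the fly. The tanh expression below
# is a placeholder for the actual RVE output.
eps1 = np.linspace(-0.05, 0.05, 21)           # membrane strain components
eps2 = np.linspace(-0.05, 0.05, 21)
E1, E2 = np.meshgrid(eps1, eps2, indexing="ij")
stress_table = 1.0e3 * np.tanh(20.0 * E1) + 50.0 * E2   # stand-in RVE output

lookup = RegularGridInterpolator((eps1, eps2), stress_table)

# During the macro simulation each integration point just queries the table.
macro_strains = np.array([[0.012, -0.003], [0.030, 0.010]])
print(lookup(macro_strains))
```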

  3. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...

  4. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    Science.gov (United States)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  5. An efficient algorithm for computing the H-infinity norm

    NARCIS (Netherlands)

    Belur, Madhu N.; Praagman, C.

    2011-01-01

    This technical note addresses the computation of the H-infinity norm by directly computing the isolated common zeros of two bivariate polynomials, unlike the iterative algorithm that is currently used to find the H-infinity norm. The proposed method for H-infinity norm calculation is compared with th

  6. Efficient computation of steady, 3D water-wave patterns

    NARCIS (Netherlands)

    Lewis, M.R.; Koren, B.

    2003-01-01

    Numerical methods for the computation of stationary free surfaces are the subject of much current research in computational engineering. The present report is directed towards free surfaces in maritime engineering. Of interest here are the long steady waves generated by ships, the gravity waves. In t

  7. An Efficient Audio Classification Approach Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lhoucine Bahatti

    2016-05-01

    Full Text Available In order to achieve an audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal using a constant window, this paper deals with a new audio classification method which improves feature extraction according to the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. A further enhancement of this work lies in the proposal of an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

  8. A radial basis function network approach for the computation of inverse continuous time variant functions.

    Science.gov (United States)

    Mayorga, René V; Carrera, Jonathan

    2007-06-01

    This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time-variant functions via an overall damped least squares solution that includes a novel null space vector for singularity prevention. The singularity-avoidance null space vector is derived by developing a sufficiency condition for singularity prevention that leads to the establishment of some characterizing matrices and an associated performance index.
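
    The damped least squares update with a null-space term is a standard construction and can be sketched as follows; the paper's particular singularity-avoidance vector is not reproduced, so `z` here is just a generic preferred secondary motion.

```python
import numpy as np

# Standard damped-least-squares step with a null-space term (a sketch of the
# general technique, not the paper's specific null-space vector).
def dls_step(J, error, z, damping=0.1):
    """J: Jacobian of the forward map; error: desired output correction;
    z: preferred secondary motion, projected into the null space of J."""
    m = J.shape[0]
    JJt = J @ J.T + damping**2 * np.eye(m)
    dx_task = J.T @ np.linalg.solve(JJt, error)      # damped pseudo-inverse step
    N = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J   # null-space projector
    return dx_task + N @ z

J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])
print(dls_step(J, error=np.array([0.1, -0.2]), z=np.array([0.0, 0.0, 0.05])))
```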

  9. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  10. Energy efficiency and the law: A multidisciplinary approach

    Directory of Open Access Journals (Sweden)

    Willemien du Plessis

    2015-01-01

    Full Text Available South Africa is an energy-intensive country. The inefficient use of, mostly, coal-generated energy is the cause of South Africa's per capita contribution to greenhouse gas emissions, pollution and environmental degradation and negative health impacts. The inefficient use of the country's energy also amounts to the injudicious use of natural resources. Improvements in energy efficiency are an important strategy to stabilise the country's energy crisis. Government responded to this challenge by introducing measures such as policies and legislation to change energy consumption patterns by, amongst others, incentivising the transition to improved energy efficiencies. A central tenet underpinning this review is that the law and energy nexus requires a multidisciplinary approach as well as a multi-pronged adoption of diverse policy instruments to effectively transform the country's energy use patterns. Numerous, innovative instruments are introduced by relevant legislation to encourage the transformation of energy generation and consumption patterns of South Africans. One such innovative instrument is the ISO 50001 energy management standard. It is a voluntary instrument, to plan for, measure and verify energy-efficiency improvements. These improvements may also trigger tax concessions. In this paper, the nature and extent of the various policy instruments and legislation that relate to energy efficiency are explored, while the interactions between the law and the voluntary ISO 50001 standard and between the law and the other academic disciplines are highlighted. The introduction of energy-efficiency measures into law requires a multidisciplinary approach, as lawyers may be challenged to address the scientific and technical elements that characterise these legal measures and instruments. Inputs by several other disciplines such as engineering, mathematics or statistics, accounting, environmental management and auditing may be needed. Law is often

  11. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by using the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometrical problems. The numerical tests show that the elapsed time per iteration can be improved by a factor of 17∼42 using the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.

  12. Energy efficient computing exploiting the properties of light

    Science.gov (United States)

    Shamir, Joseph

    2013-09-01

    Significant reduction of energy dissipation in computing can be achieved by addressing the theoretical lower limit of energy consumption and replacing arrays of traditional Boolean logic gates by other methods of implementing logic operations. In particular, a slight modification of the concept of computing allows the incorporation of fundamentally lossless optical processes as part of the computing operation. While the introduced new concepts can be implemented electronically or by other means, using optics eliminates also energy dissipation involved in the translation of electric charges. A possible realization of the indicated concepts is based on directed logic networks composed of reversible optical logic gate arrays.

  13. Efficient Data-parallel Computations on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Task scheduling determines the performance of NOW computing to a large extent. However, the computer system architecture, computing capability and system load are rarely considered together. In this paper, a biggest-heterogeneous scheduling algorithm is presented. It fully considers the system characteristics (from the application view), structure and state, so it can always utilize all processing resources under a reasonable premise. Experimental results show the algorithm can significantly shorten the response time of jobs.

  14. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    Science.gov (United States)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
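
    The key structural point, that each normal mode contributes independently so the cost per frequency point is linear in the number of modes, can be sketched for an open-loop modal model as below (all modal parameters are randomly generated placeholders, not the paper's 703-mode model).

```python
import numpy as np

# Sketch of the structured computation for an open-loop modal model (standard
# modal superposition, illustrating why the cost per frequency point is linear
# in the number of modes; all parameters are made up).
n_modes = 703
rng = np.random.default_rng(3)
omega_n = np.sort(rng.uniform(1.0, 500.0, n_modes))   # natural frequencies
zeta = 0.01 * np.ones(n_modes)                        # modal damping ratios
b = rng.normal(size=n_modes)                          # input participation
c = rng.normal(size=n_modes)                          # output participation

def frf(omega):
    # each mode contributes independently: no full-matrix inverse is needed
    denom = omega_n**2 - omega**2 + 2j * zeta * omega_n * omega
    return np.sum(c * b / denom)

freqs = np.linspace(0.1, 600.0, 4)
print([frf(w) for w in freqs])
```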

  15. Computationally efficient permutation-based confidence interval estimation for tail-area FDR

    Directory of Open Access Journals (Sweden)

    Joshua eMillstein

    2013-09-01

    Full Text Available Challenges of satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regards to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach to relate sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24 hour REM sleep (FDR = 0.15; CI 0.08, 0.26). Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in wnt signaling and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility for choosing a significance threshold based on tolerance for false discoveries and precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to define significance, such as the minimum estimated FDR, an option that is especially useful for weak effects, often observed in studies of complex diseases.
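
    A stripped-down sketch of permutation-based tail-area FDR estimation is given below; it takes the proportion of true nulls as 1 and uses a crude binomial interval, omitting the overdispersion correction described above, and all data are synthetic.

```python
import numpy as np

# Synthetic sketch: tail-area FDR at a fixed threshold from a small number of
# permutations, with pi0 taken as 1 and a crude binomial interval (the
# overdispersion correction described in the abstract is omitted).
rng = np.random.default_rng(4)
n_tests, n_perm = 5000, 10
obs = np.abs(np.concatenate([rng.normal(0, 1, 4800),    # true nulls
                             rng.normal(3, 1, 200)]))   # true signals
perm = np.abs(rng.normal(0, 1, (n_perm, n_tests)))      # permutation nulls

threshold = 2.5
R = max(int(np.sum(obs >= threshold)), 1)               # observed positives
V = np.sum(perm >= threshold, axis=1)                   # false positives/perm
fdr = V.mean() / R                                      # tail-area FDR estimate

p_hat = V.mean() / n_tests                              # per-test tail prob.
se = np.sqrt(p_hat * (1.0 - p_hat) / (n_perm * n_tests))
ci = ((p_hat - 1.96 * se) * n_tests / R,
      (p_hat + 1.96 * se) * n_tests / R)
print(fdr, ci)
```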

  16. An efficient algorithm for nucleolus and prekernel computation in some classes of TU-games

    NARCIS (Netherlands)

    Faigle, U.; Kern, W.; Kuipers, J.

    1998-01-01

    We consider classes of TU-games. We show that we can efficiently compute an allocation in the intersection of the prekernel and the least core of the game if we can efficiently compute the minimum excess for any given allocation. In the case where the prekernel of the game contains exactly one core

  17. Energy conversion approaches and materials for high-efficiency photovoltaics

    Science.gov (United States)

    Green, Martin A.; Bremner, Stephen P.

    2017-01-01

    The past five years have seen significant cost reductions in photovoltaics and a correspondingly strong increase in uptake, with photovoltaics now positioned to provide one of the lowest-cost options for future electricity generation. What is becoming clear as the industry develops is that area-related costs, such as costs of encapsulation and field-installation, are increasingly important components of the total costs of photovoltaic electricity generation, with this trend expected to continue. Improved energy-conversion efficiency directly reduces such costs, with increased manufacturing volume likely to drive down the additional costs associated with implementing higher efficiencies. This suggests the industry will evolve beyond the standard single-junction solar cells that currently dominate commercial production, where energy-conversion efficiencies are fundamentally constrained by Shockley-Queisser limits to practical values below 30%. This Review assesses the overall prospects for a range of approaches that can potentially exceed these limits, based on ultimate efficiency prospects, material requirements and developmental outlook.

  18. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  19. An approach to computing direction relations between separated object groups

    Science.gov (United States)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and that the results are correct from the point of view of spatial cognition.

  20. A tale of three bio-inspired computational approaches

    Science.gov (United States)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched and some lessons I have gleaned from three decades of working with EC will be covered. Then come neural networks, computational approaches that have long been studied as possible ways to make "thinking machines", an old dream of man's, based upon the only known existing example of intelligence. Then, a little overview of attempts to combine these two approaches that some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  1. The Formal Approach to Computer Game Rule Development Automation

    OpenAIRE

    Elena, A

    2009-01-01

    Computer game rules development is one of the weakly automated tasks in game development. This paper gives an overview of an ongoing research project which deals with automation of rules development for turn-based strategy computer games. Rules are the basic elements of these games. This paper proposes a new approach to automation including visual formal rules model creation, model verification and model-based code generation.

  2. The process group approach to reliable distributed computing

    Science.gov (United States)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  3. An efficient computational strategy for composite laminates assemblies including variability

    National Research Council Canada - National Science Library

    Roulet, V; Boucard, P.-A; Champaney, L

    2013-01-01

    The aim of this work is to present an efficient numerical strategy for studying the influence of the material parameters on problems involving 3D assemblies of composite parts with contact and friction...

  4. Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.

    Science.gov (United States)

    Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M

    2015-10-01

    The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods but offers better behavior in terms of aliasing and transparent boundary conditions, along with the flexibility that the numbers of sampling points and the computational window sizes of the input and output planes are independent. This flexibility makes the method significantly faster than FFT-based propagators when only a single point (as in Strehl metrics) or a limited number of points (as in power-in-the-bucket metrics) is needed in the output observation plane.
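
    As a concrete illustration of the matrix formulation, here is a hedged 1D sketch (our construction, not the authors' implementation): the discretized Fresnel kernel becomes an explicit matrix, propagation is a single matrix-vector product, a single observation point (as in a Strehl metric) costs only one inner product, and the input and output grids are chosen independently.

    ```python
    import numpy as np

    wavelength, z = 1e-6, 1.0                    # wavelength and distance [m]
    n_in, n_out = 512, 64                        # grids may differ in size
    x_in = np.linspace(-5e-3, 5e-3, n_in)
    x_out = np.linspace(-1e-3, 1e-3, n_out)      # independent output window
    dx = x_in[1] - x_in[0]
    k = 2 * np.pi / wavelength

    # Discretized 1D Fresnel integral:
    # U(x2) = sum_x1 U(x1) exp(ik (x2 - x1)^2 / 2z) dx / sqrt(i lambda z)
    diff = x_out[:, None] - x_in[None, :]
    M = np.exp(1j * k * diff**2 / (2 * z)) * dx / np.sqrt(1j * wavelength * z)

    u_in = np.exp(-(x_in / 1e-3) ** 2)           # Gaussian field at input plane
    u_out = M @ u_in                             # full output plane: one matmul
    on_axis = M[n_out // 2] @ u_in               # single point: one inner product
    ```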

  5. Development of a computationally efficient urban flood modelling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Ntegeka, Victor; Murla, Damian

    the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is a factor of the order of 10^6 shorter than the original highly...

  6. Computational approaches for efficiently modelling of small atmospheric clusters

    DEFF Research Database (Denmark)

    Elm, Jonas; Mikkelsen, Kurt Valentin

    2014-01-01

    Utilizing a comprehensive test set of 205 clusters of atmospheric relevance, we investigate how different DFT functionals (M06-2X, PW91, ωB97X-D) and basis sets (6-311++G(3df,3pd), 6-31++G(d,p), 6-31+G(d)) affect the thermal contribution to the Gibbs free energy and single point energy. Reducing...

  7. Computational biomechanics for medicine new approaches and new applications

    CERN Document Server

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  8. A distributed computing approach to mission operations support. [for spacecraft

    Science.gov (United States)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  9. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    Science.gov (United States)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as computational times required to reach the minimum values, are compared to large population sizes with long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate
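
    For reference, a generic micro-GA works roughly as sketched below (our generic version; the paper's CSO-specific encodings and constraint handling are not reproduced): a tiny population evolves without mutation until it converges, then restarts around the current elite, which is the diversity mechanism alluded to above.

    ```python
    import random

    def micro_ga(fitness, n_bits, pop_size=5, restarts=40, gens=30):
        """Minimize `fitness` over bit strings with a tiny, restarting GA."""
        best = None
        for _ in range(restarts):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            if best is not None:
                pop[0] = best[:]                      # elitist restart
            for _ in range(gens):
                scored = sorted(pop, key=fitness)
                elite = scored[0]
                if best is None or fitness(elite) < fitness(best):
                    best = elite[:]
                # nearly uniform population -> converged, trigger a restart
                if all(sum(a != b for a, b in zip(ind, elite)) <= 1 for ind in pop):
                    break
                new_pop = [elite[:]]                  # elitism
                while len(new_pop) < pop_size:        # uniform crossover, no mutation
                    p1, p2 = random.sample(scored[:3], 2)
                    new_pop.append([random.choice(pair) for pair in zip(p1, p2)])
                pop = new_pop
        return best

    print(micro_ga(lambda ind: sum(ind), n_bits=32))  # toy objective: all zeros
    ```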

  10. EFFICIENT RETRIEVAL TECHNIQUES FOR IMAGES USING ENHANCED UNIVARIATE TRANSFORMATION APPROACH

    Directory of Open Access Journals (Sweden)

    Raghbendra Singh

    2010-08-01

    In this paper the author presents a simple design approach for the RF section, consisting of a slow wave structure (SWS) and input/output couplers, of a Ka-band (20.6-21.2 GHz), 40 W helix Traveling Wave Tube. For simulation of the SWS, three software packages, CST MWS, ANSOFT HFSS and the in-house developed SUNRAY-1D, were used to meet the desired power (>40 W), gain (>45 dB) and electronic efficiency (>17%). For simulation of the coupler together with the SWS, ANSOFT HFSS and CST MWS were used. In the analysis of the coupler section, a VSWR < 1.2 was achieved.

  11. Limits on efficient computation in the physical world

    Science.gov (United States)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  12. Efficient algorithm and computing tool for shading calculation

    Directory of Open Access Journals (Sweden)

    Chanadda Pongpattana

    2006-03-01

    The window is always part of a building envelope, and it earns its respect in creating the architectural elegance of a building. Despite the major advantage of daylight utilization, a window inevitably allows heat from solar radiation to penetrate into a building. Hence, window design must be performed with careful consideration in order to achieve an energy-conscious design in which daylight utilization and heat gain are optimized. This paper presents the validation of the vectorial formulation of shading calculation by comparing computational results with experimental ones for three shading devices: overhang, fin, and eggcrate. A computational algorithm and interactive computer software for computing the shadow were developed. The software was designed to be user-friendly, capable of presenting shadow profiles graphically and computing the corresponding shaded areas for a given window system. Software simulation results were in excellent agreement with experimental results: the average percentage error is approximately 0.25%, 0.52%, and 0.21% for the overhang, fin, and eggcrate, respectively.
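
    For the simplest case of a horizontal overhang, the geometry behind such shading calculations reduces to a projection via the solar profile angle. The sketch below is a minimal assumed formulation for illustration, not the paper's software:

    ```python
    import numpy as np

    def overhang_shadow_fraction(depth, win_height, altitude_deg, rel_azimuth_deg):
        """Fraction of window height shaded by an overhang of given depth [m]."""
        alt = np.radians(altitude_deg)
        raz = np.radians(rel_azimuth_deg)   # sun azimuth relative to wall normal
        if np.cos(raz) <= 0:
            return 1.0                      # sun behind the wall: fully shaded
        profile = np.arctan(np.tan(alt) / np.cos(raz))   # solar profile angle
        shadow = depth * np.tan(profile)    # shadow depth below the overhang
        return float(np.clip(shadow / win_height, 0.0, 1.0))

    print(overhang_shadow_fraction(depth=0.6, win_height=1.5,
                                   altitude_deg=55, rel_azimuth_deg=20))
    ```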

  13. An efficient numerical integral in three-dimensional electromagnetic field computations

    Science.gov (United States)

    Whetten, Frank L.; Liu, Kefeng; Balanis, Constantine A.

    1990-01-01

    An improved algorithm for efficiently computing a sinusoid and an exponential integral commonly encountered in method-of-moments solutions is presented. The new algorithm has been tested for accuracy and computer execution time against both numerical integration and other existing numerical algorithms, and has outperformed them. Typical execution time comparisons on several computers are given.

  14. [A motivational approach of cognitive efficiency in nursing home residents].

    Science.gov (United States)

    Clément, Evelyne; Vivicorsi, Bruno; Altintas, Emin; Guerrien, Alain

    2014-06-01

    Despite a widespread concern with self-determined motivation (behavior engaged in "out of pleasure" or "out of choice and valued as being important") and psychological adjustment in later life (well-being, satisfaction with life, meaning of life, or self-esteem), very little is known about the existence and nature of the links between self-determined motivation and cognitive efficiency. The aim of the present study was to investigate these links in nursing home residents in the framework of Self-determination theory (SDT) (Deci & Ryan, 2002), in which the motivational profile of a person is determined by the combination of different kinds of motivation. We hypothesized that self-determined motivation would lead to higher cognitive efficiency. Participants: 39 (32 women and 7 men) elderly nursing home residents (mean age 83.6 ± 9.3 years) without any neurological or psychiatric disorders (DSM-IV) or depression or anxiety (Hamilton depression rating scales) were included in the study. Methods: Cognitive efficiency was evaluated by two brief neuropsychological tests, the Mini Mental State Examination (MMSE) and the Frontal Assessment Battery (FAB). The motivational profile was assessed by the Elderly Motivation Scale (Vallerand & O'Connor, 1991), which includes four subscales assessing self- and non-self-determined motivation to engage oneself in different domains of daily life activity. Results: The neuropsychological scores were positively and significantly correlated with self-determined extrinsic motivation (behavior engaged in "out of choice" and valued as being important), and the global self-determination index (self-determined motivational profile) was the best predictor of cognitive efficiency. Conclusion: The results support the interest of SDT for a qualitative assessment of the motivation of elderly people and suggest that a motivational approach to cognitive efficiency could help to interpret cognitive performances exhibited during neuropsychological

  15. Work-Efficient Parallel Skyline Computation for the GPU

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira

    2015-01-01

    The skyline operator returns records in a dataset that provide optimal trade-offs of multiple dimensions. State-of-the-art skyline computation involves complex tree traversals, data-ordering, and conditional branching to minimize the number of point-to-point comparisons. Meanwhile, GPGPU computing offers the potential for parallelizing skyline computation across thousands of cores. However, attempts to port skyline algorithms to the GPU have prioritized throughput and failed to outperform sequential algorithms. In this paper, we introduce a new skyline algorithm, designed for the GPU, that uses a global, static partitioning scheme. With the partitioning, we can permit controlled branching to exploit transitive relationships and avoid most point-to-point comparisons. The result is a non-traditional GPU algorithm, SkyAlign, that prioritizes work-efficiency and respectable throughput, rather than maximal throughput, to achieve orders of magnitude faster performance.

  16. Energy Efficient Routing in Wireless Sensor Networks: A Genetic Approach

    CERN Document Server

    Chakraborty, Ayon; Naskar, Mrinal Kanti

    2011-01-01

    The key parameters that need to be addressed while designing protocols for sensor networks are energy awareness and computational feasibility in resource-constrained sensor nodes. Variation in the distances of nodes from the base station and differences in inter-nodal distances are primary factors causing unequal energy dissipation among the nodes. Thus the energy difference among the nodes increases with time, resulting in degraded network performance. The LEACH and PEGASIS schemes, which provided elegant solutions to the problem, suffer due to randomization of cluster heads and greedy chain formation, respectively. In this paper, we propose a Genetic algorithm inspired ROUting Protocol (GROUP) which shows enhanced performance in terms of energy efficiency and network lifetime over other schemes. GROUP increases network performance by ensuring sub-optimal energy dissipation of the individual nodes despite their random deployment. It employs modern heuristics like Genetic Algorithms along with Simulated Annealing...

  17. Probabilistic structural analysis algorithm development for computational efficiency

    Science.gov (United States)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of the probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. In the last six years, the algorithms have improved very significantly.

  18. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Science.gov (United States)

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging, based on cloud computing, to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also results in rising costs for cloud users. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between a cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
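
    The abstract does not give RUA's exact scoring rule, so the sketch below is a generic utilization-aware stand-in: a best-fit-decreasing placement that picks the host left with the least remaining capacity, after which idle hosts can be powered off (the PA step).

    ```python
    def place_vms(vm_loads, host_capacity, n_hosts):
        """Greedy best-fit-decreasing VM placement; returns active-host loads."""
        hosts = [0.0] * n_hosts
        for load in sorted(vm_loads, reverse=True):
            candidates = [(host_capacity - used - load, i)
                          for i, used in enumerate(hosts)
                          if used + load <= host_capacity]
            if not candidates:
                raise RuntimeError("insufficient capacity")
            _, i = min(candidates)            # tightest fit wins
            hosts[i] += load
        return [u for u in hosts if u > 0]    # idle hosts are shut down

    print(place_vms([0.5, 0.4, 0.3, 0.2, 0.2], host_capacity=1.0, n_hosts=5))
    # -> two active hosts, [0.9, 0.7]; the other three can be powered off
    ```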

  19. Efficient computation of exposure profiles for counterparty credit risk

    NARCIS (Netherlands)

    Graaf, C.S.L. de; Feng, Q.; Kandhai, B.D.; Oosterlee, C.W.

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the financial industry.

  20. Efficient Computation of Exposure Profiles for Counterparty Credit Risk

    NARCIS (Netherlands)

    de Graaf, C.S.L.; Feng, Q.; Kandhai, D.; Oosterlee, C.W.

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the financial industry.

  1. Efficient Computation of Exposure Profiles for Counterparty Credit Risk

    NARCIS (Netherlands)

    de Graaf, C.S.L.; Feng, Q.; Kandhai, D.; Oosterlee, C.W.

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the financial industry.

  2. Efficient computation of exposure profiles for counterparty credit risk

    NARCIS (Netherlands)

    C.S.L. de Graaf (Kees); Q. Feng (Qian); B.D. Kandhai; C.W. Oosterlee (Cornelis)

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the financial industry.

  3. Nanobiotechnology: an efficient approach to drug delivery of unstable biomolecules.

    Science.gov (United States)

    Amaral, A C; Felipe, M S S

    2013-11-01

    Biotechnology and nanotechnology are fields of science that can be applied together to solve a variety of biological issues. In the case of human health, biotechnology attempts to improve advances in therapy against several diseases. Therapeutic peptides and proteins are promising molecules for developing new medicines. Gene transfection and RNA interference have been considered important approaches in modern therapy to treat cancer and viral infections. However, because of their instability, these molecules alone cannot be used for in vivo application, since they are easily degraded or exhibit poor efficiency. Nanotechnology can contribute through the development of nanostructured delivery systems that increase the stability and potency of these molecules. Studies involving polymeric and magnetic nanoparticles, dendrimers, and carbon nanotubes have demonstrated the possibility of using these systems as vectors instead of the conventional viral ones, which present adverse effects such as recombination and immunogenicity. This review presents some possibilities and strategies to efficiently deliver peptides, proteins, genes and RNA interference using a nanotechnology approach.

  4. A Robust and Efficient Approach to License Plate Detection.

    Science.gov (United States)

    Yuan, Yule; Zou, Wenbin; Zhao, Yong; Wang, Xinan; Hu, Xuefeng; Komodakis, Nikos

    2017-03-01

    This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of 1082×728. The executable code and our collected data set are publicly available.

  5. An Efficient Context-Aware Privacy Preserving Approach for Smartphones

    Directory of Open Access Journals (Sweden)

    Lichen Zhang

    2017-01-01

    With the proliferation of smartphones and smartphone apps, privacy preservation has become an important issue. Existing privacy preservation approaches for smartphones are usually less efficient because they do not consider active defense policies and temporal correlations between contexts related to users. In this paper, by modeling the temporal correlations among contexts, we formalize the privacy preservation problem as an optimization problem and prove its correctness and optimality through theoretical analysis. To further reduce the running time, we transform the original optimization problem into an approximate problem expressed as a linear program. By solving the linear program, an efficient context-aware privacy preserving algorithm (CAPP) is designed, which adopts an active defense policy and decides how to release the current context of a user so as to maximize the quality of service (QoS) of context-aware apps while preserving privacy. Extensive simulations on a real dataset demonstrate the improved performance of CAPP over traditional approaches.

  6. Alternative approach for Article 5. Energie Efficiency Directive; Alternatieve aanpak artikel 5. Energy Efficiency Directive

    Energy Technology Data Exchange (ETDEWEB)

    Menkveld, M.; Jablonska, B. [ECN Beleidsstudies, Petten (Netherlands)

    2013-05-15

    Article 5 of the Energy Efficiency Directive (EED) imposes an annual obligation to renovate 3% of the building stock of the central government. After renovation the buildings must meet the minimum energy performance requirements laid down under Article 4 of the EPBD. The Directive leaves room for an alternative approach that achieves the same savings. The Ministry of the Interior asked ECN to assist with this alternative approach. ECN calculated what savings are achieved with the 3% renovation obligation under the Directive, and then examined the possibilities of an alternative approach to achieve the same savings. [Dutch summary, translated] Article 5 of the EED requires that, after renovation, the 3% of the building stock comply with the minimum energy performance requirements that the member state has laid down pursuant to Article 4 of the EPBD. The obligation concerns buildings owned and used by the national government with a usable floor area greater than 500 m², and from July 2015 greater than 250 m². The buildings owned by the Rijksgebouwendienst comprise offices of national government services, court buildings, customs and police buildings, and prisons. Of the Defence buildings, only offices and accommodation buildings need to comply with the obligation.

  7. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama

    2015-04-23

    In this paper, we consider the resource allocation problem for the energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or the achievable rate, we propose a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and better suited to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. We then apply the generalized framework of resource allocation for the EE-SE trade-off to optimally allocate the subcarriers' power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
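
    One common way to realize such a trade-off parameter is a weighted-sum scalarization, sketched below under our own simplifying assumptions (a toy four-subcarrier link with perfect channel knowledge; not the paper's exact formulation): alpha = 1 maximizes the SE term alone, alpha = 0 minimizes total power alone.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    g = np.array([2.0, 1.0, 0.5, 0.25])   # subcarrier channel-to-noise ratios
    p_max, alpha = 4.0, 0.7                # power budget and trade-off parameter

    def objective(p):
        se = np.sum(np.log2(1 + g * p))    # spectral-efficiency term
        return -(alpha * se - (1 - alpha) * np.sum(p))

    res = minimize(objective, x0=np.full(4, 1.0),
                   bounds=[(0, p_max)] * 4,
                   constraints={'type': 'ineq', 'fun': lambda p: p_max - p.sum()})
    print(res.x)                           # water-filling-like power profile
    ```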

  8. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory, and describes the basic theory of gPC methods...

  9. On Approaching the Ultimate Limits of Photon-Efficient and Bandwidth-Efficient Optical Communication

    CERN Document Server

    Dolinar, Sam; Erkmen, Baris I; Moision, Bruce

    2011-01-01

    It is well known that ideal free-space optical communication at the quantum limit can have unbounded photon information efficiency (PIE), measured in bits per photon. High PIE comes at a price of low dimensional information efficiency (DIE), measured in bits per spatio-temporal-polarization mode. If only temporal modes are used, then DIE translates directly to bandwidth efficiency. In this paper, the DIE vs. PIE tradeoffs for known modulations and receiver structures are compared to the ultimate quantum limit, and analytic approximations are found in the limit of high PIE. This analysis shows that known structures fall short of the maximum attainable DIE by a factor that increases linearly with PIE for high PIE. The capacity of the Dolinar receiver is derived for binary coherent-state modulations and computed for the case of on-off keying (OOK). The DIE vs. PIE tradeoff for this case is improved only slightly compared to OOK with photon counting. An adaptive rule is derived for an additive local oscillator th...

  10. Efficiency using computer simulation of Reverse Threshold Model Theory on assessing a “One Laptop Per Child” computer versus desktop computer

    Directory of Open Access Journals (Sweden)

    Supat Faarungsang

    2017-04-01

    The Reverse Threshold Model Theory (RTMT) was introduced based on limiting-factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT relative to CM using computer simulation on a "One Laptop Per Child" computer and a desktop computer. Based on probability values, RTMT was found to be more efficient than CM across eight treatment combinations, and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed for application to most research data.

  11. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    2011-01-01

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and has found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the level set method.

  12. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and has found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the level set method.

  13. Efficient protein engineering by combining computational design and directed evolution

    NARCIS (Netherlands)

    Floor, Robert

    2015-01-01

    The use of enzymatic catalysis can make chemistry considerably more sustainable. Enzymes often have high selectivity and activity, so that a raw material is efficiently converted into a product. However, an important limitation is that enzymes are often not stable enough to

  14. Assessing Trustworthiness in Social Media: A Social Computing Approach

    Science.gov (United States)

    2015-11-17

    Final Report: Assessing Trustworthiness in Social Media: A Social Computing Approach (31 May 2015; approved for public release, distribution unlimited). We propose to investigate research issues related to social media trustworthiness and its assessment by leveraging social research methods... attributes of interest associated with a particular social media user related to the received information. This tool provides a way to combine different

  15. A Unified Computational Approach to Oxide Aging Processes

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, D.J.; Fleetwood, D.M.; Hjalmarson, H.P.; Schultz, P.A.

    1999-01-27

    In this paper we describe a unified, hierarchical computational approach to aging and reliability problems caused by materials changes in the oxide layers of Si-based microelectronic devices. We apply this method to a particular low-dose-rate radiation effects problem.

  16. Pedagogical Approaches to Teaching with Computer Simulations in Science Education

    NARCIS (Netherlands)

    Rutten, N.P.G.; van der Veen, Johan (CTIT); van Joolingen, Wouter; McBride, Ron; Searson, Michael

    2013-01-01

    For this study we interviewed 24 physics teachers about their opinions on teaching with computer simulations. The purpose of this study is to investigate whether it is possible to distinguish different types of teaching approaches. Our results indicate the existence of two types. The first type is

  17. A Computationally Based Approach to Homogenizing Advanced Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (DIffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied to Ni-based superalloys as well as the computationally more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and its subsequent verification on real castings are presented.

  18. Computationally Efficient Implementation of Convolution-based Locally Adaptive Binarization Techniques

    OpenAIRE

    Mollah, Ayatullah Faruk; Basu, Subhadip; Nasipuri, Mita

    2012-01-01

    One of the most important steps of document image processing is binarization. The computational requirements of locally adaptive binarization techniques make them unsuitable for devices with limited computing facilities. In this paper, we have presented a computationally efficient implementation of convolution-based locally adaptive binarization techniques, keeping the performance comparable to the original implementation. The computational complexity has been reduced from O(W²N²) to O(WN²)...
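
    A standard way to obtain exactly this kind of reduction, shown here as an assumed illustration rather than the paper's specific scheme, is to exploit separability: a W×W box mean factors into two 1D passes of width W, so the local statistic behind the threshold costs O(WN²) instead of O(W²N²).

    ```python
    import numpy as np

    def separable_local_mean(img, w):
        """W x W local mean via two 1D convolutions: O(W N^2), not O(W^2 N^2)."""
        kernel = np.ones(w) / w
        rows = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode='same'), 1, img)
        return np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode='same'), 0, rows)

    def binarize(img, w=15, bias=0.97):
        """Niblack-style rule: foreground where darker than bias * local mean."""
        return (img < bias * separable_local_mean(img, w)).astype(np.uint8) * 255

    img = np.random.randint(0, 256, (64, 64)).astype(float)
    out = binarize(img)
    ```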

  19. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed-memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared-memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
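
    The computational core these libraries optimize is the reduction of a tensor contraction to matrix multiplication. The toy example below (illustrative only; Libtensor's blocked, symmetry-aware layout is far more involved) shows a CCSD-like contraction written both as an einsum and as the single DGEMM it amounts to after reshaping.

    ```python
    import numpy as np

    o, v = 8, 24                        # occupied/virtual orbital counts (toy)
    t2 = np.random.rand(o, o, v, v)     # doubles amplitudes t_{ij}^{cd}
    eri = np.random.rand(v, v, v, v)    # two-electron integrals <ab|cd>

    # r_{ij}^{ab} = sum_{cd} t_{ij}^{cd} <ab|cd>
    r = np.einsum('ijcd,abcd->ijab', t2, eri)

    # The same contraction as one matrix-matrix multiply after reshaping:
    r2 = (t2.reshape(o * o, v * v)
          @ eri.reshape(v * v, v * v).T).reshape(o, o, v, v)
    assert np.allclose(r, r2)
    ```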

  20. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge, as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes.
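
    For orientation, the simplest free-energy estimator in this family is Zwanzig's exponential formula, sketched below on toy data (context only; the chapter's QM-NBB and QM nonequilibrium-work methods are more sophisticated reweighting schemes, not this estimator).

    ```python
    import numpy as np

    def zwanzig_delta_f(delta_u, kT=0.593):   # kT ~ 0.593 kcal/mol at 298 K
        """dF = -kT ln < exp(-dU/kT) >_0, from gaps sampled in state 0."""
        du = np.asarray(delta_u)
        return -kT * np.log(np.mean(np.exp(-du / kT)))

    # toy data: energy gaps U_1(x) - U_0(x) over snapshots from state 0
    rng = np.random.default_rng(0)
    print(zwanzig_delta_f(rng.normal(1.0, 0.5, 5000)))
    ```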

  1. Efficient Model for Distributed Computing based on Smart Embedded Agent

    Directory of Open Access Journals (Sweden)

    Hassna Bensag

    2017-02-01

    Technological advances in embedded computing have exposed humans to an increasing intrusion of computing in their day-to-day lives (e.g., smart devices). Cooperation, autonomy, and mobility have made the agent a promising mechanism for embedded devices. This work presents a new model of an embedded agent designed to be implemented in smart devices in order to perform parallel tasks in a distributed environment. To validate the proposed model, a case study was developed for medical image segmentation using cardiac Magnetic Resonance Imaging (MRI). In the first part of this paper, we focus on implementing the parallel C-means classification algorithm in embedded systems. We then propose a new concept of distributed classification using multi-agent systems based on JADE and Raspberry Pi 2 devices.
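
    The classification method named above is fuzzy C-means; a compact serial version is sketched below for reference (a textbook implementation, not the paper's parallel embedded-agent code).

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
        """Return cluster centers and the membership matrix U (n x c)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m                                   # fuzzified memberships
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        return centers, U

    X = np.random.rand(200, 2)
    centers, U = fuzzy_c_means(X)
    ```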

  2. Efficient Flow Control Scheme in Multimedia Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jinsheng Tan

    2013-10-01

    Multimedia cloud computing involves a great deal of computation on graphics, images, audio and video, which consumes substantial resources and makes traffic control a key issue. The serial design of the traditional HTB (Hierarchical Token Bucket) scheduler is the bottleneck of its processing speed. The author proposes a pipelined, parallelized HTB flow control mechanism for multi-core processors, improves the flow control analysis and algorithm, and carries out experimental testing. The results show that, compared to traditional flow control, the pipelined and parallelized HTB flow control on multi-core processors not only greatly improves processing power but also maintains good stability, so as to meet the demands of multimedia cloud computing users and data scale.
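
    The serial building block being parallelized is the token bucket; a single bucket in plain Python looks as follows (HTB arranges many such buckets in a class hierarchy, and the paper pipelines that hierarchy across cores, which this sketch does not attempt).

    ```python
    import time

    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate, self.capacity = rate_bps, burst_bytes
            self.tokens, self.stamp = burst_bytes, time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            elapsed = now - self.stamp
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.stamp = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True        # conforming: forward the packet
            return False           # non-conforming: queue or drop

    bucket = TokenBucket(rate_bps=125_000, burst_bytes=10_000)  # ~1 Mbit/s
    print(bucket.allow(1500))
    ```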

  3. Application of Green computing in Framing Energy Efficient Software Engineering

    Directory of Open Access Journals (Sweden)

    Aritra Mitra, Riya Basu, Avik Guha, Shalabh Agarwal, Asoke Nath

    2013-03-01

    Green computing and energy saving is now a very important issue in computer science and information technology. Due to tremendous growth in information technology, the big challenge now is how to minimize power usage and how to reduce the carbon footprint. Green computing is now a prime research area where people are trying to minimize the carbon footprint and the usage of energy. To minimize the usage of energy there are two independent approaches: one is designing suitable hardware, and the second is to redesign the software methodology. In the present paper the authors have tried to explore the software methodologies and designs that can be used today to save energy. The authors have also tried to extend mobile platform battery time, as well as the various tools that support the development of energy-efficient software.

  4. Computationally efficient perturbative forward modeling for 3D multispectral bioluminescence and fluorescence tomography

    Science.gov (United States)

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Chaudhari, Abhijit J.; Cherry, Simon R.; Leahy, Richard M.

    2008-03-01

    The forward problem of optical bioluminescence and fluorescence tomography seeks to determine, for a given 3D source distribution, the photon density on the surface of an animal. Photon transport through tissues is commonly modeled by the diffusion equation. The challenge, then, is to accurately and efficiently solve the diffusion equation for a realistic animal geometry and heterogeneous tissue types. Fast analytical solvers are available that can be applied to arbitrary geometries but assume homogeneity of tissue optical properties and hence have limited accuracy. The finite element method (FEM) with volume tessellation allows reasonably accurate modeling of both animal geometry and tissue heterogeneity, but this approach is computationally intensive. The computational challenge is heightened when one is working with multispectral data to improve source localization and conditioning of the inverse problem. Here we present a fast forward model based on the Born approximation that falls in between these two approaches. Our model introduces tissue heterogeneity as perturbations in diffusion and absorption coefficients at rectangular grid points inside a mouse atlas. These reflect as a correction term added to the homogeneous forward model. We have tested our model by performing source localization studies first with a biolumnescence simulation setup and then with an experimental setup using a fluorescent source embedded in an inhomogeneous phantom that mimicks tissue optical properties.
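
    The correction term described above is, up to sign and normalization conventions, the usual first-order Born expansion of the diffusion forward model; in our paraphrase (not the paper's exact notation), with G_0 the homogeneous Green's function and δμ_a the absorption perturbation on the grid:

    ```latex
    \Phi(\mathbf{r}_d) \;\approx\; \Phi_0(\mathbf{r}_d)
      \;-\; \int G_0(\mathbf{r}_d,\mathbf{r}')\,
            \delta\mu_a(\mathbf{r}')\,\Phi_0(\mathbf{r}')\,d^3r'
      \;+\; \big(\text{an analogous gradient term in } \delta D\big)
    ```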

  5. Computer Forensics for Graduate Accountants: A Motivational Curriculum Design Approach

    Directory of Open Access Journals (Sweden)

    Grover Kearns

    2010-06-01

    Computer forensics involves the investigation of digital sources to acquire evidence that can be used in a court of law. It can also be used to identify and respond to threats to hosts and systems. Accountants use computer forensics to investigate computer crime or misuse, theft of trade secrets, theft of or destruction of intellectual property, and fraud. Education of accountants in the use of forensic tools is a goal of the AICPA (American Institute of Certified Public Accountants). Accounting students, however, may not view information technology as vital to their career paths and need motivation to acquire forensic knowledge and skills. This paper presents a curriculum design methodology for teaching graduate accounting students computer forensics. The methodology is tested using the students' perceptions of the success of the methodology and their acquisition of forensics knowledge and skills. An important component of the pedagogical approach is the use of an annotated list of over 50 forensic web-based tools.

  6. A GPU-Computing Approach to Solar Stokes Profile Inversion

    CERN Document Server

    Harker, Brian J

    2012-01-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS (GENEtic Stokes Inversion Strategy), employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disc maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel genetic algorithm with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disc vector magnetograms...

  7. Cloud Computing – A Unified Approach for Surveillance Issues

    Science.gov (United States)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location, through networks. Cloud computing is gradually replacing the traditional information technology infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal or sensitive information is being stored in the organization. It is indeed true that today's cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  8. Efficient computations of wave loads on offshore structures

    DEFF Research Database (Denmark)

    Paulsen, Bo Terp

    The present thesis considers numerical computations of fully nonlinear wave impacts on bottom-mounted surface-piercing circular cylinders at intermediate water depths. The aim of the thesis is to provide new knowledge regarding wave loads on foundations for offshore wind turbines. The numerical model is carefully validated against experimental measurements of regular, irregular and multi-directional irregular waves, and the ability of the numerical model to accurately reproduce experiments is investigated. Wave impacts on a bottom-mounted circular cylinder from steep regular waves are presented. Here, the inline forces and the motion of the free surface are described as a function of the non-dimensional wave steepness, the relative water depth, the relative cylinder diameter and a co-existing current. From the computations, higher harmonic forces are determined and compared against the Morison equation...
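
    The Morison equation referenced above has a compact standard form; a minimal sketch follows (a textbook formula with illustrative coefficient values, not the thesis' solver):

    ```python
    import numpy as np

    def morison_inline_force(u, dudt, D, rho=1025.0, Cd=1.0, Cm=2.0):
        """Inline force per unit length on a vertical cylinder [N/m]."""
        drag = 0.5 * rho * Cd * D * u * np.abs(u)            # velocity term
        inertia = rho * Cm * (np.pi * D**2 / 4.0) * dudt     # acceleration term
        return drag + inertia

    t = np.linspace(0, 10, 200)
    u = 1.5 * np.sin(2 * np.pi * t / 8.0)    # wave-induced horizontal velocity
    f = morison_inline_force(u, np.gradient(u, t), D=6.0)    # e.g. a monopile
    ```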

  9. Computational intelligence approaches for pattern discovery in biological systems.

    Science.gov (United States)

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  10. Efficiency and comfort of knee braces: A parametric study based on computational modelling

    CERN Document Server

    Pierrat, Baptiste; Calmels, Paul; Navarro, Laurent; Avril, Stéphane

    2014-01-01

    Knee orthotic devices are widely prescribed by physicians and medical practitioners for preventive or therapeutic purposes, their effects usually being understood as stabilizing the joint or restricting its range of motion. This study focuses on understanding the force-transfer mechanisms from the brace to the joint by means of a finite element model. A Design of Experiments approach was used to characterize the stiffness and comfort of various braces in order to identify their mechanically influential characteristics. Results show a conflicting behavior: influential parameters such as brace size or textile stiffness improve performance at the expense of comfort. Thanks to this computational tool, novel brace designs can be tested and evaluated for an optimal mechanical efficiency of the devices and a better compliance of the patient with the treatment.

  11. An efficient approach to imaging underground hydraulic networks

    Science.gov (United States)

    Kumar, Mohi

    2012-07-01

    To better locate natural resources, treat pollution, and monitor underground networks associated with geothermal plants, nuclear waste repositories, and carbon dioxide sequestration sites, scientists need to be able to accurately characterize and image fluid seepage pathways below ground. With these images, scientists can gain knowledge of soil moisture content, the porosity of geologic formations, concentrations and locations of dissolved pollutants, and the locations of oil fields or buried liquid contaminants. Creating images of the unknown hydraulic environments underfoot is a difficult task that has typically relied on broad extrapolations from characteristics and tests of rock units penetrated by sparsely positioned boreholes. Such methods, however, cannot identify small-scale features and are very expensive to reproduce over a broad area. Further, the techniques through which information is extrapolated rely on clunky and mathematically complex statistical approaches requiring large amounts of computational power.

  12. An efficient LDA+U based tight binding approach.

    Science.gov (United States)

    Sanna, Simone; Hourahine, B; Gallauner, Th; Frauenheim, Th

    2007-07-01

    The functionals usually applied in DFT calculations have deficiencies in describing systems with strongly localized electrons such as transition metal or rare earth (RE) compounds. In this work, we present the self-consistent charge density-functional tight binding (SCC-DFTB) calculation scheme including LDA+U-like potentials and apply it to the simulation of RE-doped GaN. DFTB parameters for the simulation of GaN and a selection of rare earth ions, with the f electrons explicitly included in the valence, have been created. The results of the simulations were tested against experimental data (where available) and against various more sophisticated but computationally more costly DFT calculations. Our approach is found to correctly reproduce the geometry and energetics of the studied systems.

  13. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Document Server

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC), users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to them, from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced Intel® Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  14. Automatic Generation of Very Efficient Programs by Generalized Partial Computation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Generalized Partial Computation (GPC) is a program transformation method utilizing partial information about input data, properties of auxiliary functions and the logical structure of a source program. GPC uses both an inference engine such as a theorem prover and a classical partial evaluator to optimize programs. Therefore, GPC is more powerful than classical partial evaluators but harder to implement and control. We have implemented an experimental GPC system called WSDFU (Waseda Simplify-Distribute-Fold-Unfold). This paper discusses the power of the program transformation system, its theorem prover and future works.
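
    Classical partial evaluation, the component GPC builds on, can be illustrated with a toy specializer (our example; it models only the specialization step, not GPC's theorem proving):

    ```python
    def specialize_power(n):
        """Given a static exponent n, emit a residual program for x**n."""
        body = "1"
        for _ in range(n):
            body = f"x * ({body})"
        src = f"def power_{n}(x):\n    return {body}\n"
        ns = {}
        exec(src, ns)               # compile the residual program
        return ns[f"power_{n}"], src

    power_5, src = specialize_power(5)
    print(src)                      # the unfolded, loop-free residual code
    print(power_5(2))               # 32
    ```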

  15. Work-Efficient Parallel Skyline Computation for the GPU

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira

    2015-01-01

    The skyline operator returns records in a dataset that provide optimal trade-offs of multiple dimensions. State-of-the-art skyline computation involves complex tree traversals, data-ordering, and conditional branching to minimize the number of point-to-point comparisons. Meanwhile, GPGPU computing offers the potential for parallelizing skyline computation across thousands of cores. However, attempts to port skyline algorithms to the GPU have prioritized throughput and failed to outperform sequential algorithms. In this paper, we introduce a new skyline algorithm, designed for the GPU, that uses a global, static partitioning scheme. With the partitioning, we can permit controlled branching to exploit transitive relationships and avoid most point-to-point comparisons. The result is a non-traditional GPU algorithm, SkyAlign, that prioritizes work-efficiency and respectable throughput, rather than maximal throughput, to achieve orders of magnitude faster performance.

  16. Techniques for Efficiently Ensuring Data Storage Security in Cloud Computing

    DEFF Research Database (Denmark)

    Banoth, Rajkumar

    2011-01-01

    The Cloud Computing is the next generation architecture of IT Enterprise. It moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. Here, focus is on cloud data storage security, an important aspect...... of quality of service. To ensure the correctness of users’ data in the cloud, we propose an effective and flexible distributed scheme with two salient features. By utilizing the homomorphic token with distributed verification of erasure-coded data, the scheme achieves the integration of storage correctness...

  17. Energy-efficient high performance computing measurement and tuning

    CERN Document Server

    III, James H Laros; Kelly, Sue

    2012-01-01

    In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter the operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes.

  18. Neuromolecular computing: a new approach to human brain evolution.

    Science.gov (United States)

    Wallace, R; Price, H

    1999-09-01

    Evolutionary approaches in human cognitive neurobiology traditionally emphasize macroscopic structures. It may soon be possible to supplement these studies with models of human information-processing at the molecular level. Thin-film, simulation, fluorescence microscopy, and high-resolution X-ray crystallographic studies provide evidence for transiently organized neural membrane molecular systems with possible computational properties. This review article examines evidence for hydrophobic-mismatch molecular interactions within phospholipid microdomains of a neural membrane bilayer. It is proposed that these interactions are a massively parallel algorithm which can rapidly compute near-optimal solutions to complex cognitive and physiological problems. Coupling of microdomain activity to permeant ion movements at ligand-gated and voltage-gated channels permits the conversion of molecular computations into neuron frequency codes. Evidence for microdomain transport of proteins to specific locations within the bilayer suggests that neuromolecular computation may be under some genetic control and thus modifiable by natural selection. A possible experimental approach for examining evolutionary changes in neuromolecular computation is briefly discussed.

  19. Automatic and efficient driving strategies while approaching a traffic light

    CERN Document Server

    Treiber, Martin

    2014-01-01

    Vehicle-infrastructure communication opens up new ways to improve traffic flow efficiency at signalized intersections. In this study, we assume that equipped vehicles can obtain information about switching times of relevant traffic lights in advance. This information is used to improve traffic flow by the strategies 'early braking', 'anticipative start', and 'flying start'. The strategies can be implemented in driver-information mode, or in automatic mode by an Adaptive Cruise Controller (ACC). Quality criteria include cycle-averaged capacity, driving comfort, fuel consumption, travel time, and the number of stops. By means of simulation, we investigate the isolated strategies and the complex interactions between the strategies and between equipped and non-equipped vehicles. As universal approach to assess equipment level effects we propose relative performance indexes and found, at a maximum speed of 50 km/h, improvements of about 15% for the number of stops and about 4% for the other criteria. All figures d...

  20. An Efficient Fuzzy Clustering-Based Approach for Intrusion Detection

    CERN Document Server

    Nguyen, Huu Hoa; Darmont, Jérôme

    2011-01-01

    The need to increase accuracy in detecting sophisticated cyber attacks poses a great challenge not only to the research community but also to corporations. So far, many approaches have been proposed to cope with this threat. Among them, data mining has brought on remarkable contributions to the intrusion detection problem. However, the generalization ability of data mining-based methods remains limited, and hence detecting sophisticated attacks remains a tough task. In this thread, we present a novel method based on both clustering and classification for developing an efficient intrusion detection system (IDS). The key idea is to take useful information exploited from fuzzy clustering into account for the process of building an IDS. To this aim, we first present cornerstones to construct additional cluster features for a training set. Then, we come up with an algorithm to generate an IDS based on such cluster features and the original input features. Finally, we experimentally prove that our method outperform...

  1. Computationally Efficient Nonlinearity Compensation for Coherent Fiber-Optic Systems

    Institute of Scientific and Technical Information of China (English)

    Likai Zhu; Guifang Li

    2012-01-01

    Split-step digital backward propagation (DBP) can be combined with coherent detection to compensate for fiber nonlinear impairments. A large number of DBP steps is usually needed for a long-haul fiber system, and this creates a heavy computational load. In a trade-off between complexity and performance, interchannel nonlinearity can be disregarded in order to simplify the DBP algorithm. The number of steps can also be reduced at the expense of performance. In periodic dispersion-managed long-haul transmission systems, optical waveform distortion is dominated by chromatic dispersion. As a result, the nonlinearity of the optical signal repeats in every dispersion period. Because of this periodic behavior, DBP of many fiber spans can be folded into one span. Using this distance-folded DBP method, the required computation for a transoceanic transmission system with full inline dispersion compensation can be reduced by up to two orders of magnitude with negligible penalty. The folded DBP method can be modified to compensate for nonlinearity in fiber links with non-zero residual dispersion per span.
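
    A minimal sketch of one split-step DBP span is given below, with a `folding` factor standing in for the distance-folding idea (scaling the nonlinear phase so one computed span represents several identical dispersion periods). The function name, signs (which depend on the NLSE convention) and the folding treatment are simplifying assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def dbp_span(rx, dt, beta2, gamma, span_len, steps, folding=1):
        """One span of split-step digital backward propagation (DBP).

        Undoes fiber propagation by applying dispersion and Kerr nonlinearity
        with negated coefficients (signs follow one common NLSE convention).
        'folding' scales the nonlinear phase so a single span can stand in
        for several identical dispersion periods, as in folded DBP."""
        w = 2.0 * np.pi * np.fft.fftfreq(rx.size, d=dt)   # angular frequencies
        dz = span_len / steps
        inv_disp = np.exp(0.5j * beta2 * w**2 * dz)       # inverse dispersion
        for _ in range(steps):
            rx = np.fft.ifft(np.fft.fft(rx) * inv_disp)                       # linear half step
            rx = rx * np.exp(-1j * folding * gamma * np.abs(rx)**2 * dz)      # nonlinear step
        return rx
    ```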

  2. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    Science.gov (United States)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves and the wave-induced flows like the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a

  3. One approach for evaluating the Distributed Computing Design System (DCDS)

    Science.gov (United States)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  4. The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.

    Science.gov (United States)

    Bronson, Richard

    1986-01-01

    Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)

  5. Consistent analytic approach to the efficiency of collisional Penrose process

    CERN Document Server

    Harada, Tomohiro; Miyamoto, Umpei

    2016-01-01

    We propose a consistent analytic approach to the efficiency of the collisional Penrose process in the vicinity of a maximally rotating Kerr black hole. We focus on a collision with arbitrarily high centre-of-mass energy, which occurs if either of the colliding particles has its angular momentum fine-tuned to the critical value to enter the horizon. We show that if the fine-tuned particle is ingoing on the collision, the upper limit of the efficiency is $(2+\sqrt{3})(2-\sqrt{2})\simeq 2.186$, while if the fine-tuned particle is bounced back before the collision, the upper limit is $(2+\sqrt{3})^{2}\simeq 13.93$. Despite earlier claims, the former can be attained for inverse Compton scattering if the fine-tuned particle is massive and starts at rest at infinity, while the latter can be attained for various particle reactions, such as inverse Compton scattering and pair annihilation, if the fine-tuned particle is either massless or highly relativistic at infinity. We discuss the difference between the present and earlier analyses.

  6. Consistent analytic approach to the efficiency of collisional Penrose process

    Science.gov (United States)

    Harada, Tomohiro; Ogasawara, Kota; Miyamoto, Umpei

    2016-07-01

    We propose a consistent analytic approach to the efficiency of the collisional Penrose process in the vicinity of a maximally rotating Kerr black hole. We focus on a collision with arbitrarily high center-of-mass energy, which occurs if either of the colliding particles has its angular momentum fine-tuned to the critical value to enter the horizon. We show that if the fine-tuned particle is ingoing on the collision, the upper limit of the efficiency is $(2+\sqrt{3})(2-\sqrt{2})\simeq 2.186$, while if the fine-tuned particle is bounced back before the collision, the upper limit is $(2+\sqrt{3})^{2}\simeq 13.93$. Despite earlier claims, the former can be attained for inverse Compton scattering if the fine-tuned particle is massive and starts at rest at infinity, while the latter can be attained for various particle reactions, such as inverse Compton scattering and pair annihilation, if the fine-tuned particle is either massless or highly relativistic at infinity. We discuss the difference between the present and earlier analyses.

  7. Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget

    Science.gov (United States)

    2014-01-21

    Streeter and Golovin, 2008, Das and Kempe, 2011] typically used in the submodular optimization and sparse approximation domains. We will use a cost... Krause and Golovin [2012]. Most relevant to our work are the approaches for the budgeted or knapsack-constrained submodular maximization problem. In... Golovin [2008], which gives an approximation guarantee for certain budgets dependent on the problem. Finally, our work will build off of previous work

  8. An Efficient Approach of Processing Multiple Continuous Queries

    Institute of Scientific and Technical Information of China (English)

    Wen Liu; Yan-Ming Shen; Peng Wang

    2016-01-01

    As stream data is being more frequently collected and analyzed, stream processing systems are faced with more design challenges. One challenge is to perform continuous window aggregation, which involves intensive computation. When there are a large number of aggregation queries, the system may suffer from scalability problems. The queries are usually similar and only differ in window specifications. In this paper, we propose collaborative aggregation which promotes aggregate sharing among the windows so that repeated aggregate operations can be avoided. Different from the previous approaches in which the aggregate sharing is restricted by the window pace, we generalize the aggregation over multiple values as a series of reductions. Therefore, the results generated by each reduction step can be shared. The sharing process is formalized in the feed semantics and we present the compose-and-declare framework to determine the data sharing logic at a very low cost. Experimental results show that our approach offers an order of magnitude performance improvement to the state-of-the-art results and has a small memory footprint.
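
    The flavor of aggregate sharing can be illustrated with the simplest possible case: many sliding-window SUM queries that differ only in window length can share one prefix-sum reduction instead of recomputing each window from scratch. This toy sketch (the `shared_window_sums` helper is hypothetical) is far simpler than the paper's feed semantics and compose-and-declare framework, but shows why sharing reduction results pays off.

    ```python
    def shared_window_sums(stream, window_lengths):
        """Answer many sliding-window SUM queries (slide = 1) that differ
        only in window length by sharing one prefix-sum reduction, instead
        of re-aggregating every window of every query independently."""
        prefix = [0]
        for x in stream:
            prefix.append(prefix[-1] + x)      # shared reduction, built once
        return {w: [prefix[i + w] - prefix[i] for i in range(len(stream) - w + 1)]
                for w in window_lengths}

    print(shared_window_sums([3, 1, 4, 1, 5, 9, 2, 6], [2, 4]))
    # {2: [4, 5, 5, 6, 14, 11, 8], 4: [9, 11, 19, 17, 22]}
    ```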

  9. Efficient relaxed-Jacobi smoothers for multigrid on parallel computers

    Science.gov (United States)

    Yang, Xiang; Mittal, Rajat

    2017-03-01

    In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic grid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform the lexicographic Gauss-Seidel by factors that increase with domain partition count.
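
    A minimal sketch of such a relaxed-Jacobi smoother for the 1-D Poisson problem is shown below; the relaxation factors used here are illustrative placeholders, not the optimized values derived in the paper.

    ```python
    import numpy as np

    def relaxed_jacobi_sweep(u, f, h, omegas):
        """Successive relaxed Jacobi iterations for -u'' = f with Dirichlet
        boundaries, cycling through the given relaxation factors. Pairing
        over- and under-relaxed steps is what lets these smoothers damp a
        broad band of error frequencies."""
        for omega in omegas:
            jac = u.copy()
            jac[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])  # plain Jacobi
            u = (1.0 - omega) * u + omega * jac                    # relaxation
        return u

    n = 65
    h = 1.0 / (n - 1)
    u = np.random.rand(n); u[0] = u[-1] = 0.0
    u = relaxed_jacobi_sweep(u, np.ones(n), h, omegas=(0.8, 1.6))  # illustrative factors
    ```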

  10. Communication efficient basic linear algebra computations on hypercube architectures

    Energy Technology Data Exchange (ETDEWEB)

    Johnsson, S.L.

    1987-04-01

    This paper presents a few algorithms for embedding loops and multidimensional arrays in hypercubes, with emphasis on proximity-preserving embeddings. A proximity-preserving embedding minimizes the need for communication bandwidth in computations requiring nearest neighbor communication. Two storage schemes for "large" problems on "small" machines are suggested and analyzed, and algorithms for matrix transpose, multiplying matrices, factoring matrices, and solving triangular linear systems are presented. A few complete binary tree embeddings are described and analyzed. The data movement in the matrix algorithms is analyzed and it is shown that in the majority of cases the directed routing paths intersect only at nodes of the hypercube, allowing for a maximum degree of pipelining.
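
    A classic example of a proximity-preserving embedding is mapping a ring onto hypercube nodes with a reflected Gray code, so every nearest-neighbour hop in the ring crosses exactly one cube edge; the sketch below assumes that standard construction rather than any scheme specific to the paper.

    ```python
    def gray(i):
        """Reflected binary Gray code: consecutive integers map to hypercube
        node labels differing in exactly one bit."""
        return i ^ (i >> 1)

    def embed_ring(d):
        """Proximity-preserving embedding of a 2^d-node ring into a d-cube:
        successive ring nodes land on hypercube neighbours, so nearest-
        neighbour communication never crosses more than one link."""
        return [gray(i) for i in range(2 ** d)]

    ring = embed_ring(3)
    for a, b in zip(ring, ring[1:] + ring[:1]):
        assert bin(a ^ b).count("1") == 1  # every hop is a single cube edge
    ```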

  11. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  12. Performance Comparison of Hybrid Signed Digit Arithmetic in Efficient Computing

    Directory of Open Access Journals (Sweden)

    VISHAL AWASTHI

    2011-10-01

    Full Text Available In redundant representations, addition can be carried out in a constant time, independent of the word length of the operands. Adders form a fundamental building block in the majority of VLSI designs. A hybrid adder can add an unsigned number to a signed-digit number, and its efficient performance therefore greatly determines the quality of the final output of the concerned circuit. In this paper we design and compare the speed of adders, reducing the carry propagation time through the combined effect of improved adder architectures and signed-digit representations of number systems. The key idea is to strike a compromise between the execution time of the fast adding process and the available area, which is often very limited. In this paper we also verify the various algorithms for signed-digit and hybrid signed-digit adders.

  13. Efficient Multidimensional Data Redistribution for Resizable Parallel Computations

    CERN Document Server

    Sudarsan, Rajesh

    2007-01-01

    Traditional parallel schedulers running on cluster supercomputers support only static scheduling, where the number of processors allocated to an application remains fixed throughout the execution of the job. This results in under-utilization of idle system resources thereby decreasing overall system throughput. In our research, we have developed a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executing on distributed memory platforms. The resizing library in ReSHAPE includes support for releasing and acquiring processors and efficiently redistributing application state to a new set of processors. In this paper, we derive an algorithm for redistributing two-dimensional block-cyclic arrays from $P$ to $Q$ processors, organized as 2-D processor grids. The algorithm ensures a contention-free communication schedule for data redistribution if $P_r \\leq Q_r$ and $P_c \\leq Q_c$. In other cases, the algorithm implements circular row and column shifts on the communicat...

  14. The computational optimization of heat exchange efficiency in stack chimneys

    Energy Technology Data Exchange (ETDEWEB)

    Van Goch, T.A.J.

    2012-02-15

    For many industrial processes, the chimney is the final step before hot fumes, with high thermal energy content, are discharged into the atmosphere. Tapping into this energy and utilizing it for heating or cooling applications could improve sustainability and efficiency and/or reduce operational costs. Alternatively, an unused chimney, like the monumental chimney at the Eindhoven University of Technology, could serve as an 'energy channeler' once more; it can enhance free cooling by exploiting the stack effect. This study aims to identify design parameters that influence annual heat exchange in such stack chimney applications and to optimize these parameters for specific scenarios to maximize performance. Performance is defined by annual heat exchange, system efficiency and costs. The energy required for the water pump, compared to the energy exchanged, defines the system efficiency, which is expressed in an efficiency coefficient (EC). This study is an example of applying building performance simulation (BPS) tools for decision support in the early phase of the design process. In this study, BPS tools are used to provide design guidance, performance evaluation and optimization. A general method for optimization of simulation models is studied and applied in two case studies with different applications (heating/cooling), namely: (1) CERES case: 'the Eindhoven University of Technology monumental stack chimney, equipped with a heat exchanger, rejects heat to load the cold source of the aquifer system on the campus of the university and/or provides free cooling to the CERES building'; and (2) Industrial case: 'a heat exchanger in an industrial stack chimney recoups heat for use in e.g. absorption cooling'. The main research question, addressing the concerns of both cases, is expressed as follows: 'what is the optimal set of design parameters so heat exchange in stack chimneys is optimized annually for the cases in which a

  15. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    Science.gov (United States)

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
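
    Under the published formula, the interaction entropy is obtained directly from the fluctuations of the interaction energy already sampled along the trajectory. A minimal estimator might look like the following; the sample data are synthetic, and the unit convention (kcal/mol with `KB` in kcal/(mol*K)) is an assumption.

    ```python
    import numpy as np

    KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

    def interaction_entropy(e_int, temperature=300.0):
        """Entropic contribution -T*dS (kcal/mol) from a time series of
        protein-ligand interaction energies sampled along an MD trajectory:

            -T*dS = kT * ln < exp(beta * (E_int - <E_int>)) >

        computed directly from fluctuations, with no extra simulation."""
        beta = 1.0 / (KB * temperature)
        fluct = np.asarray(e_int) - np.mean(e_int)
        return KB * temperature * np.log(np.mean(np.exp(beta * fluct)))

    # synthetic example: 2 kcal/mol fluctuations give a positive entropic penalty
    samples = np.random.default_rng(0).normal(-40.0, 2.0, 5000)
    print(interaction_entropy(samples))
    ```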

  16. Efficient Computing of some Vector Operations over GF(3) and GF(4)

    OpenAIRE

    Bouyukliev, Iliya; Bakoev, Valentin

    2008-01-01

    The problem of efficient computing of the affine vector operations (addition of two vectors and multiplication of a vector by a scalar over GF(q)), and also of the weight of a given vector, is important for many problems in coding theory, cryptography, VLSI technology, etc. In this paper we propose a new way of representing vectors over GF(3) and GF(4) and describe an efficient implementation of these affine operations. Computing weights of binary vectors is also discussed.
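
    One plausible way to realize such a representation (the paper's exact encoding may differ) is to bit-slice each GF(4) vector into two machine-word planes, so that addition becomes two XORs and the weight a single popcount:

    ```python
    def gf4_add(x, y):
        """Coordinate-wise addition over GF(4). A length-w vector is stored
        bitsliced as two w-bit integers (hi, lo): coordinate i is a + b*omega
        with a = bit i of lo, b = bit i of hi, omega a root of t^2 + t + 1.
        In characteristic 2, addition is just XOR of both planes."""
        return (x[0] ^ y[0], x[1] ^ y[1])

    def gf4_scale_omega(x):
        """Multiply every coordinate by omega: (a + b*omega)*omega = b + (a+b)*omega."""
        hi, lo = x
        return (hi ^ lo, hi)

    def gf4_weight(x):
        """Hamming weight: count coordinates where either plane has a set bit."""
        hi, lo = x
        return bin(hi | lo).count("1")

    u = (0b0110, 0b1010)  # hypothetical 4-coordinate vectors
    v = (0b0011, 0b0101)
    print(gf4_add(u, v), gf4_scale_omega(u), gf4_weight(u))
    ```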

  17. The NumPy array: a structure for efficient numerical computation

    CERN Document Server

    Van Der Walt, Stefan; Varoquaux, Gaël

    2011-01-01

    In the Python world, NumPy arrays are the standard representation for numerical data. Here, we show how these arrays enable efficient implementation of numerical computations in a high-level language. Overall, three techniques are applied to improve performance: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts. We first present the NumPy array structure, then show how to use it for efficient computation, and finally how to share array data with other libraries.
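
    The three techniques the abstract names can be condensed into a few lines; this is a generic illustration of each, not code taken from the paper.

    ```python
    import numpy as np

    x = np.arange(1_000_000, dtype=np.float64)

    # 1. Vectorize: one C-level loop replaces a million-iteration Python loop.
    y = np.sin(x) * 2.0

    # 2. Avoid copies: slices are views, and 'out=' reuses existing memory.
    head = x[:1000]              # a view into x, no data copied
    np.multiply(y, 0.5, out=y)   # in-place, no temporary array allocated

    # 3. Minimize operation counts: factor expressions to make fewer passes.
    a, b = 2.0, 3.0
    z = (a + b) * x              # one multiply, instead of a*x + b*x (two + add)
    ```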

  18. The NumPy array: a structure for efficient numerical computation

    OpenAIRE

    Van der Walt, Stefan; Colbert, S. Chris; Varoquaux, Gaël

    2011-01-01

    In the Python world, NumPy arrays are the standard representation for numerical data. Here, we show how these arrays enable efficient implementation of numerical computations in a high-level language. Overall, three techniques are applied to improve performance: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts. We first present the NumPy array structure, then show how to use it for efficient computation, and finally how to shar...

  19. Computational Approach for Multi Performances Optimization of EDM

    Directory of Open Access Journals (Sweden)

    Yusoff Yusliza

    2016-01-01

    Full Text Available This paper proposes a new computational approach for obtaining the optimal parameters of multi-performance EDM. Regression and artificial neural network (ANN) models are used as the modeling techniques, while a multi-objective genetic algorithm (multiGA) is used as the optimization technique. An orthogonal array L256 is implemented in the procedure of network function and network architecture selection. Experimental studies are carried out to verify the machining performances suggested by this approach. The highest MRR value obtained from OrthoANN – MPR – MultiGA is 205.619 mg/min and the lowest Ra value is 0.0223 μm.

  20. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    Science.gov (United States)

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.

  1. Efficient computation of coherent synchrotron radiation in a rectangular chamber

    Science.gov (United States)

    Warnock, Robert L.; Bizzozero, David A.

    2016-09-01

    We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μm and a charge of 100 pC we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.

  2. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  3. Computer Mechatronics: A Radical Approach to Mechatronics Education

    OpenAIRE

    Nilsson, Martin

    2005-01-01

    This paper describes some distinguishing features of a course on mechatronics, based on computer science. We propose a teaching approach called Controlled Problem-Based Learning (CPBL). We have applied this method on three generations (2003-2005) of mainly fourth-year undergraduate students at Lund University (LTH). Although students found the course difficult, there were no dropouts, and all students attended the examination 2005.

  4. COMPTEL skymapping: a new approach using parallel computing

    OpenAIRE

    Strong, A.W.; Bloemen, H.; Diehl, R.; Hermsen, W.; Schoenfelder, V.

    1998-01-01

    Large-scale skymapping with COMPTEL using the full survey database presents challenging problems on account of the complex response and time-variable background. A new approach which attempts to address some of these problems is described, in which the information about each observation is preserved throughout the analysis. In this method, a maximum-entropy algorithm is used to determine image and background simultaneously. Because of the extreme computing requirements, the method has been im...

  5. Review: the physiological and computational approaches for atherosclerosis treatment.

    Science.gov (United States)

    Wang, Wuchen; Lee, Yugyung; Lee, Chi H

    2013-09-01

    Cardiovascular disease has long caused severe loss of life, especially in conditions associated with arterial malfunction attributable to atherosclerosis and subsequent thrombotic formation. This article reviews the physiological mechanisms that underlie the transition from plaque formation in the atherosclerotic process to platelet aggregation and eventually thrombosis. The physiological and computational approaches, such as percutaneous coronary intervention and stent design modeling, to detect, evaluate and mitigate this malicious progression are also discussed.

  6. A spline-based approach for computing spatial impulse responses.

    Science.gov (United States)

    Ellis, Michael A; Guenther, Drake; Walker, William F

    2007-05-01

    Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.

  7. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    Science.gov (United States)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data. Not only because data is knowledge, but it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating systems and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserve computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  8. Efficient computation method for two-dimensional nonlinear waves

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The theory and simulation of fully-nonlinear waves in a truncated two-dimensional wave tank in the time domain are presented. A piston-type wave-maker is used to generate gravity waves into the tank field in finite water depth. A damping zone is added in front of the wave-maker, which makes it a kind of absorbing wave-maker and ensures the prescribed Neumann condition. The efficiency of the numerical tank is further enhanced by installation of a sponge layer beach (SLB) in front of the downtank to absorb longer weak waves that leak through the entire wave train front. Assuming potential flow, the space-periodic irrotational surface waves can be represented by mixed Euler-Lagrange particles. Solving the integral equation at each time step for new normal velocities, the instantaneous free surface is integrated through time by use of the fourth-order Runge-Kutta method. The double node technique is used to deal with geometric discontinuity at the wave-body intersections. Several precise smoothing methods have been introduced to treat surface points with high curvature. No saw-tooth-like instability is observed during the entire simulation. The advantage of the proposed wave tank has been verified by comparison with the linear theoretical solution and other nonlinear results; excellent agreement over the whole range of frequencies of interest has been obtained.
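
    The time integrator named in the abstract is the classical fourth-order Runge-Kutta scheme; a generic step, applicable to the free-surface evolution equations or any ODE system, is sketched below.

    ```python
    import numpy as np

    def rk4_step(f, t, y, dt):
        """One classical fourth-order Runge-Kutta step: four slope samples
        combined with weights 1, 2, 2, 1 give O(dt^4) accuracy per step."""
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    # toy check on y' = -y, whose exact solution at t = 1 is exp(-1)
    y, t, dt = np.array([1.0]), 0.0, 0.1
    for _ in range(10):
        y, t = rk4_step(lambda t, y: -y, t, y, dt), t + dt
    print(y[0], np.exp(-1.0))
    ```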

  9. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past...
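
    One standard set-membership construction (not necessarily the authors' exact scheme) propagates an axis-aligned box through the linear dynamics, using |A| to grow the box radius, and flags a fault whenever a measurement falls outside the predicted box:

    ```python
    import numpy as np

    def interval_step(A, B, c, r, u, w_bound):
        """Propagate the box {x : |x - c| <= r} through x+ = A x + B u + w,
        with |w| <= w_bound elementwise. Taking |A| on the radius yields a
        guaranteed interval outer-approximation of the reachable set; a
        measurement outside the predicted box is inconsistent -> fault."""
        c_next = A @ c + B @ u
        r_next = np.abs(A) @ r + w_bound
        return c_next, r_next

    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[1.0], [0.5]])
    c, r = np.zeros(2), np.full(2, 0.1)              # initial uncertainty box
    c, r = interval_step(A, B, c, r, np.array([1.0]), np.full(2, 0.05))
    print(c, r)  # center and radius of the predicted box
    ```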

  10. Computational Efficiency through Visual Argument: Do Graphic Organizers Communicate Relations in Text Too Effectively?

    Science.gov (United States)

    Robinson, Daniel H.; Schraw, Gregory

    1994-01-01

    Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)

  11. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Asada, Toshio; Koseki, Shiro [Department of Chemistry, Graduate School of Science, Osaka Prefecture University, 1-1 Gakuen-cho, Sakai, Osaka 599-8531 (Japan); The Research Institute for Molecular Electronic Devices (RIMED), Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai 599-8531 (Japan); Ando, Kanta [Department of Chemistry, Graduate School of Science, Osaka Prefecture University, 1-1 Gakuen-cho, Sakai, Osaka 599-8531 (Japan)

    2015-12-31

    An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, which accounts for polarization effects of the quantum mechanical (QM) region, is described using the charge response and atom dipole response kernels for free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce energies and also energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  12. An efficient multiple particle filter based on the variational Bayesian approach

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-12-07

    This paper addresses the filtering problem in large-dimensional systems, in which conventional particle filters (PFs) remain computationally prohibitive owing to the large number of particles needed to obtain reasonable performances. To overcome this drawback, a class of multiple particle filters (MPFs) has been recently introduced in which the state-space is split into low-dimensional subspaces, and then a separate PF is applied to each subspace. In this paper, we adopt the variational Bayesian (VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient since the propagation of each particle requires generating one (new) particle only, while in the standard MPFs a set of (children) particles needs to be generated. In a numerical test, the proposed VBMPF behaves better than the PF and MPF.

  13. Time efficient 3-D electromagnetic modeling on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1995-08-01

    A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three-dimensional earth to a dipole source for frequencies ranging from 100 to 100 MHz. The numerical problem is formulated in terms of a frequency-domain modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite difference grid, which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1 MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex, which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite difference stencil to a processor stencil which allows for the necessary information to be exchanged between processors that contain adjacent nodes in the model, (2) determining the most efficient method to input the model, which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently, and (3) deciding how to output the data, which is an inherently nonparallel process.

  14. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Directory of Open Access Journals (Sweden)

    Alberto Cocaña-Fernández

    2016-03-01

    Full Text Available As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. Moreover, high power consumption is becoming a serious bottleneck for the expansion of these infrastructures in economic terms, due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques, learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between a reduction in the cluster energy consumption, service quality, and the number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real-world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.

  15. Computationally efficient SVM multi-class image recognition with confidence measures

    Energy Technology Data Exchange (ETDEWEB)

    Makili, Lazaro [Dpto. Informatica y Automatica - UNED, Madrid (Spain); Vega, Jesus, E-mail: jesus.vega@ciemat.es [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Dormido-Canto, Sebastian [Dpto. Informatica y Automatica - UNED, Madrid (Spain); Pastor, Ignacio [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Murari, Andrea [Associazione EURATOM-CIEMAT per la Fusione, Consorzio RFX, Padova (Italy)

    2011-10-15

    Typically, machine learning methods produce non-qualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are very recent classifiers able to provide, simultaneously with the prediction, a couple of values (confidence and credibility) to reflect the quality of the prediction. Usually, a drawback of the transductive techniques for huge datasets and large dimensionality is the high computational time. To overcome this issue, a more efficient classifier has been used in a multi-class image classification problem in the TJ-II stellarator database. It is based on the creation of a hash function to generate several 'one versus the rest' classifiers for every class. By using Support Vector Machines as the underlying classifier, a comparison between the pure transductive approach and the new method has been performed. In both cases, the success rates are high and the computation time with the new method is up to 0.4 times the old one.

  16. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    Science.gov (United States)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow
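
    The optimization layer of such a design synthesis can be sketched with an off-the-shelf differential evolution driver; here a throwaway analytic objective stands in for the finite-element evaluation of each candidate design, and the parameter names and bounds are entirely hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def design_cost(params):
        """Throwaway analytic stand-in for the FE-based evaluation: in the
        dissertation each candidate geometry would instead be scored by the
        computationally efficient finite element solver."""
        magnet_width, slot_depth = params        # hypothetical design variables
        return -(np.sin(magnet_width) * np.exp(-0.1 * slot_depth)) + 0.05 * slot_depth

    bounds = [(0.1, 3.0), (0.1, 10.0)]           # hypothetical geometric ranges
    result = differential_evolution(design_cost, bounds, seed=1)
    print(result.x, result.fun)                  # best design and its cost
    ```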

  17. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    Science.gov (United States)

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE) value, which signifies that the amount of bias and variance in the output domain is also the least. It is also observed that optimization of the output MSE in the presence of outliers results in a very close, consistent estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical information on the MSEs are all found to be superior to those of other existing similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.

  18. A computational language approach to modeling prose recall in schizophrenia.

    Science.gov (United States)

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.

  19. Efficient and configurable transmission protocol based on UDP in grid computing

    Institute of Scientific and Technical Information of China (English)

    Jigang WANG; Guochang GU; Chunguang MA; Weidong ZHONG

    2009-01-01

    At present, mainstream data transfer protocols are not always a good match for the diverse demands of grid computing. Considering this situation, this article proposes an efficient and configurable data transfer protocol (ECUDP) for grid computing. The ECUDP is based on the standard user datagram protocol (UDP), but with a collection of optimizations that meet the challenge of providing configurability and reliability while maintaining performance that meets the communication requirements of demanding applications. Experimental results show that the ECUDP performs efficiently in various grid computing scenarios and that the performance analysis model provides a good estimate of its performance.

  20. An efficient magic state approach to small angle rotations

    Science.gov (United States)

    Campbell, Earl T.; O'Gorman, Joe

    2016-12-01

    Standard error-correction techniques only provide a quantum memory and need extra gadgets to perform computation. Central to quantum algorithms are small angle rotations, which can be fault-tolerantly implemented given a supply of an unconventional species of magic state. We present a low-cost distillation routine for preparing these small angle magic states. Our protocol builds on the work of Duclos-Cianci and Poulin (2015 Phys. Rev. A 91 042315) by compressing their circuit. Additionally, we present a method of diluting magic states that reduces costs associated with very small angle rotations. We quantify performance by the expected number of noisy magic states consumed per rotation, and compare with other protocols. For modest-sized angles, our protocols offer a factor of 24 improvement over the best-known gate synthesis protocols and a factor of 2 over the Duclos-Cianci and Poulin protocol. For very small angle rotations, the dilution protocol dramatically reduces costs, giving several orders of magnitude improvement over competitors. There also exists an intermediary regime of small, but not very small, angles where our approach gives a marginal improvement over gate synthesis. We discuss how different performance metrics may alter these conclusions.

  1. Solving hard computational problems efficiently: asymptotic parametric complexity 3-coloring algorithm.

    Directory of Open Access Journals (Sweden)

    José Antonio Martín H

    Full Text Available Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object, which can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), however parametric. The only requirement is sufficient computational power, which is controlled by the parameter $\alpha\in\mathbb{N}$. Nevertheless, here it is proved that the probability of requiring a value of $\alpha>k$ to obtain a solution for a random graph decreases exponentially: $P(\alpha>k)\le 2^{-(k+1)}$, making tractable almost all problem instances. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretically expected results.
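
    For contrast with the paper's parametric polynomial-time method, the brute-force certificate-producing baseline is easy to state: it returns an explicit coloring when one exists and, having exhausted all assignments, a proof of absence otherwise.

    ```python
    from itertools import product

    def three_color(edges, n):
        """Exhaustive 3-coloring with a certificate either way: a valid
        coloring if one exists (positive proof), otherwise None after all
        3^n assignments have been checked (negative proof). Exponential,
        unlike the parametric algorithm proposed in the paper."""
        for colors in product(range(3), repeat=n):
            if all(colors[u] != colors[v] for u, v in edges):
                return colors
        return None

    print(three_color([(0, 1), (1, 2), (0, 2)], 3))                          # triangle: colorable
    print(three_color([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4))  # K4: None
    ```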

  2. An efficient hybrid causative event-based approach for deriving the annual flood frequency distribution

    Science.gov (United States)

    Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew

    2015-04-01

    Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of the event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation, but can also lead to prediction bias due to limitations of the inherent assumptions required for obtaining input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, however, at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with similar accuracy to the continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of the underlying cause of whether a site had a high, low or no need to use seasonality found that it was based on a combination of reasons that were difficult to predict a priori. Hence it is recommended

  3. Computationally generated velocity taper for efficiency enhancement in a coupled-cavity traveling-wave tube

    Science.gov (United States)

    Wilson, Jeffrey D.

    1989-01-01

    A computational routine has been created to generate velocity tapers for efficiency enhancement in coupled-cavity TWTs. Programmed into the NASA multidimensional large-signal coupled-cavity TWT computer code, the routine generates the gradually decreasing cavity periods required to maintain a prescribed relationship between the circuit phase velocity and the electron-bunch velocity. Computational results for several computer-generated tapers are compared to those for an existing coupled-cavity TWT with a three-step taper. Guidelines are developed for prescribing the bunch-phase profile to produce a taper for efficiency. The resulting taper provides a calculated RF efficiency 45 percent higher than the step taper at center frequency and at least 37 percent higher over the bandwidth.

  4. A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems

    CERN Document Server

    Beloglazov, Anton; Lee, Young Choon; Zomaya, Albert

    2010-01-01

    Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific and business domains. However, the ever increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements it is essential to synthesize and classify the research on power and energy-efficient design conducted to date. In this work we discuss causes and problems of high power / energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization and data center levels. We survey various key works in the area and map them to our taxonomy to guide future design and development efforts. This chapter is conclu...

  5. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  6. Efficient Analysis of Pattern and Association Rule Mining Approaches

    Directory of Open Access Journals (Sweden)

    Thabet Slimani

    2014-02-01

    Full Text Available The process of data mining produces various patterns from a given data source. The most recognized data mining tasks are the processes of discovering frequent itemsets, frequent sequential patterns, frequent sequential rules and frequent association rules. Numerous efficient algorithms have been proposed to perform these tasks. Frequent pattern mining has been a focal topic in data mining research, with a good number of references in the literature, and substantial progress has been made, ranging from efficient algorithms for frequent itemset mining in transaction databases to more complex algorithms such as sequential pattern mining, structured pattern mining and correlation mining. Association rule mining (ARM) is one of the most widely used data mining techniques; it is designed to group objects together from large databases with the aim of extracting interesting correlations and relations among huge amounts of data. In this article, we provide a brief review and analysis of the current status of frequent pattern mining and discuss some promising research directions. Additionally, this paper includes a comparative study of the performance of the described approaches.
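
    As a concrete reminder of the two quantities at the heart of ARM, the sketch below computes the support and confidence of a candidate rule over a toy transaction database (the data and rule are illustrative only):

        # Support and confidence of an association rule X -> Y.
        def support(itemset, transactions):
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(x, y, transactions):
            return support(x | y, transactions) / support(x, transactions)

        transactions = [{"bread", "milk"}, {"bread", "butter"},
                        {"milk", "butter"}, {"bread", "milk", "butter"}]
        print(support({"bread", "milk"}, transactions))       # 0.5
        print(confidence({"bread"}, {"milk"}, transactions))  # ~0.67

    Algorithms such as Apriori and FP-growth differ mainly in how they avoid enumerating every candidate itemset when computing these statistics at scale.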

  7. An approach to estimate radioadaptation from DSB repair efficiency.

    Science.gov (United States)

    Yatagai, Fumio; Sugasawa, Kaoru; Enomoto, Shuichi; Honma, Masamitsu

    2009-09-01

    In this review, we would like to introduce a unique approach for the estimation of radioadaptation. Recently, we proposed a new methodology for evaluating the repair efficiency of DNA double-strand breaks (DSB) using a model system. The model system can trace the fate of a single DSB, which is introduced within intron 4 of the TK gene on chromosome 17 in human lymphoblastoid TK6 cells by the expression of restriction enzyme I-SceI. This methodology was first applied to examine whether repair of the DSB (at the I-SceI site) can be influenced by low-dose, low-dose rate gamma-ray irradiation. We found that such low-dose IR exposure could enhance the activity of DSB repair through homologous recombination (HR). HR activity was also enhanced due to the pre-IR irradiation under the established conditions for radioadaptation (50 mGy X-ray-6 h-I-SceI treatment). Therefore, radioadaptation might account for the reduced frequency of homozygous loss of heterozygosity (LOH) events observed in our previous experiment (50 mGy X-ray-6 h-2 Gy X-ray). We suggest that the present evaluation of DSB repair using this I-SceI system, may contribute to our overall understanding of radioadaptation.

  8. Analysis of resource efficiency: a production frontier approach.

    Science.gov (United States)

    Hoang, Viet-Ngu

    2014-05-01

    This article integrates the material/energy flow analysis into a production frontier framework to quantify resource efficiency (RE). The emergy content of natural resources instead of their mass content is used to construct aggregate inputs. Using the production frontier approach, aggregate inputs will be optimised relative to given output quantities to derive RE measures. This framework is superior to existing RE indicators currently used in the literature. Using the exergy/emergy content in constructing aggregate material or energy flows overcomes the criticism that mass content cannot capture the differing quality of different types of resources. Derived RE measures are both 'qualitative' and 'quantitative', whereas existing RE indicators are only qualitative. An empirical examination into the RE of 116 economies was undertaken to illustrate the practical applicability of the new framework. The results showed that economies, on average, could reduce the consumption of resources by more than 30% without any reduction in per capita gross domestic product (GDP). This calculation occurred after adjustments for differences in the purchasing power of national currencies. High variations in RE across economies were found, positively correlated with the participation of people in the labour force, population density, urbanisation, and GDP growth over the past five years. The results also showed that economies of a higher income group achieved higher RE, and those economies that are more dependent on imports and primary industries would have lower RE performance.

  9. A Computer Vision Approach to Identify Einstein Rings and Arcs

    Science.gov (United States)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating its versatility.
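
    The circle Hough transform the authors build on is available off the shelf in OpenCV; a minimal ring-detection sketch on a survey cutout might look as follows (the file name and all parameter values are placeholders, not the paper's settings):

        import cv2
        import numpy as np

        # Detect circular patterns (candidate Einstein rings) in a cutout.
        img = cv2.imread("cutout.png", cv2.IMREAD_GRAYSCALE)
        img = cv2.GaussianBlur(img, (5, 5), 0)   # suppress pixel noise first
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=30,
                                   minRadius=5, maxRadius=40)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"candidate ring at ({x}, {y}), radius {r} px")

    Lowering the accumulator threshold (param2) admits fainter arcs at the cost of more false positives, a trade-off analogous to the completeness/purity balance the abstract mentions.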

  10. Computational neuroscience approach to biomarkers and treatments for mental disorders.

    Science.gov (United States)

    Yahata, Noriaki; Kasai, Kiyoto; Kawato, Mitsuo

    2017-04-01

    Psychiatry research has long experienced a stagnation stemming from a lack of understanding of the neurobiological underpinnings of phenomenologically defined mental disorders. Recently, the application of computational neuroscience to psychiatry research has shown great promise in establishing a link between phenomenological and pathophysiological aspects of mental disorders, thereby recasting current nosology in more biologically meaningful dimensions. In this review, we highlight recent investigations into computational neuroscience that have undertaken either theory- or data-driven approaches to quantitatively delineate the mechanisms of mental disorders. The theory-driven approach, including reinforcement learning models, plays an integrative role in this process by enabling correspondence between behavior and disorder-specific alterations at multiple levels of brain organization, ranging from molecules to cells to circuits. Previous studies have explicated a plethora of defining symptoms of mental disorders, including anhedonia, inattention, and poor executive function. The data-driven approach, on the other hand, is an emerging field in computational neuroscience seeking to identify disorder-specific features among high-dimensional big data. Remarkably, various machine-learning techniques have been applied to neuroimaging data, and the extracted disorder-specific features have been used for automatic case-control classification. For many disorders, the reported accuracies have reached 90% or more. However, we note that rigorous tests on independent cohorts are critically required to translate this research into clinical applications. Finally, we discuss the utility of the disorder-specific features found by the data-driven approach to psychiatric therapies, including neurofeedback. Such developments will allow simultaneous diagnosis and treatment of mental disorders using neuroimaging, thereby establishing 'theranostics' for the first time in clinical

  11. Thermodynamic efficiency limits of classical and bifacial multi-junction tandem solar cells: An analytical approach

    Science.gov (United States)

    Alam, Muhammad Ashraful; Khan, M. Ryyan

    2016-10-01

    Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.

  12. An efficient approach for shadow detection based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    韩延祥; 张志胜; 陈芳; 陈恺

    2014-01-01

    An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification of moving shadow pixels and foreground object pixels was performed using color invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of the chromaticity difference or brightness difference can be modeled as a mixture of Gaussians consisting of two density functions. Meanwhile, the Gaussian parameter estimation was performed using the EM algorithm. The estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual experiment results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (the maximum values are 85.79% and 97.56%, respectively) show that the proposed approach achieves a satisfying result with a post-processing step.
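
    The two-component mixture described can be fitted with a few lines of scikit-learn; the sketch below separates shadow-like from object-like pixels by their brightness-difference values (the synthetic data and the darker-component heuristic are illustrative simplifications, not the paper's pipeline):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Per-pixel brightness differences for pixels already flagged as
        # moving (synthetic placeholder data: shadows darken the background).
        diffs = np.concatenate([np.random.normal(-0.4, 0.05, 500),
                                np.random.normal(0.0, 0.2, 500)])

        gmm = GaussianMixture(n_components=2, random_state=0)
        labels = gmm.fit_predict(diffs.reshape(-1, 1))
        shadow_comp = np.argmin(gmm.means_.ravel())  # darker mean = shadow
        shadow_mask = labels == shadow_comp
        print(f"{shadow_mask.mean():.0%} of moving pixels labeled as shadow")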

  13. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf

    1996-01-01

    Full Text Available Research in scientific programming enables us to realize more and more complex applications, and, on the other hand, application-driven demands on computing methods and power are continuously growing. Therefore, interdisciplinary approaches become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is presented. It automatically generates C code from a problem specification expressed by the Lagrange formalism using Maple.

  14. Solubility of nonelectrolytes: a first-principles computational approach.

    Science.gov (United States)

    Jackson, Nicholas E; Chen, Lin X; Ratner, Mark A

    2014-05-15

    Using a combination of classical molecular dynamics and symmetry adapted intermolecular perturbation theory, we develop a high-accuracy computational method for examining the solubility energetics of nonelectrolytes. This approach is used to accurately compute the cohesive energy density and Hildebrand solubility parameters of 26 molecular liquids. The energy decomposition of symmetry adapted perturbation theory is then utilized to develop multicomponent Hansen-like solubility parameters. These parameters are shown to reproduce the solvent categorizations (nonpolar, polar aprotic, or polar protic) of all molecular liquids studied while lending quantitative rigor to these qualitative categorizations via the introduction of simple, easily computable parameters. Notably, we find that by monitoring the first-order exchange energy contribution to the total interaction energy, one can rigorously determine the hydrogen bonding character of a molecular liquid. Finally, this method is applied to compute explicitly the Flory interaction parameter and the free energy of mixing for two different small molecule mixtures, reproducing the known miscibilities. This methodology represents an important step toward the prediction of molecular solubility from first principles.
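
    For reference, the Hildebrand parameter the authors compute is the square root of the cohesive energy density; estimated from the heat of vaporization it reduces to simple arithmetic (the water values below are approximate literature numbers used only to illustrate the formula):

        import math

        # Hildebrand parameter: delta = sqrt((dHvap - R*T) / Vm), i.e. the
        # square root of the cohesive energy density of the liquid.
        R, T = 8.314, 298.15          # J/(mol K), K
        dHvap = 40.65e3               # J/mol, water (approximate)
        Vm = 18.07e-6                 # m^3/mol, molar volume of water

        delta = math.sqrt((dHvap - R * T) / Vm)      # in Pa^0.5
        print(f"delta = {delta / 1e3:.1f} MPa^0.5")  # ~46, near water's ~48

    The paper's contribution is to obtain the underlying cohesive energy from first-principles interaction energies rather than from measured heats of vaporization.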

  15. Do Energy Efficiency Standards Improve Quality? Evidence from a Revealed Preference Approach

    Energy Technology Data Exchange (ETDEWEB)

    Houde, Sebastien [Univ. of Maryland, College Park, MD (United States); Spurlock, C. Anna [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Minimum energy efficiency standards have occupied a central role in U.S. energy policy for more than three decades, but little is known about their welfare effects. In this paper, we employ a revealed preference approach to quantify the impact of past revisions in energy efficiency standards on product quality. The micro-foundation of our approach is a discrete choice model that allows us to compute a price-adjusted index of vertical quality. Focusing on the appliance market, we show that several standard revisions during the period 2001-2011 have led to an increase in quality. We also show that these standards have had a modest effect on prices, and in some cases they even led to decreases in prices. For revision events where overall quality increases and prices decrease, the consumer welfare effect of tightening the standards is unambiguously positive. Finally, we show that after controlling for the effect of improvement in energy efficiency, standards have induced an expansion of quality in the non-energy dimension. We discuss how imperfect competition can rationalize these results.

  16. Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations

    CERN Document Server

    Reid, M T Homer

    2013-01-01

    We present concise, computationally efficient formulas for several quantities of interest -- including absorbed and scattered power, optical force (radiation pressure), and torque -- in scattering calculations performed using the boundary-element method (BEM) [also known as the method of moments (MOM)]. Our formulas compute the quantities of interest directly from the BEM surface currents with no need ever to compute the scattered electromagnetic fields. We derive our new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.

  17. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    Science.gov (United States)

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed, from general layout to technical details all aspects are covered. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    Science.gov (United States)

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate around the role of personal computer work as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of VDUs, we must distinguish those of the upper limbs and, among them, those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and an occupational etiology is clinically quite difficult to prove, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time of use of the personal computer or keyboard; limited evidence of a correlation is found with time spent using a mouse. No data are available regarding exclusive or predominant use of laptops or mobile smartphones.

  19. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    Directory of Open Access Journals (Sweden)

    Dongsheng Che

    2014-01-01

    Full Text Available High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria, and these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we will discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources will also be discussed, so that researchers may find them useful for studies of bacterial evolution and pathogenicity mechanisms.
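
    One of the detectable properties mentioned, an anomalous genomic signature, is often screened for with a simple sliding-window scan of GC content; a rough sketch of that idea (the window size and z-threshold are arbitrary illustrative choices, not taken from the review):

        # Flag windows whose GC content deviates strongly from the genome
        # average, a crude first-pass signal for horizontally transferred
        # regions such as pathogenicity islands.
        def gc_outlier_windows(genome: str, window=5000, z=2.5):
            gc = lambda s: (s.count("G") + s.count("C")) / len(s)
            vals = [gc(genome[i:i + window])
                    for i in range(0, len(genome) - window + 1, window)]
            mean = sum(vals) / len(vals)
            sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
            return [i * window for i, v in enumerate(vals)
                    if abs(v - mean) > z * sd]

    Practical PAI predictors combine several such signals (codon usage, mobility genes, tRNA-adjacent insertion sites) rather than relying on GC content alone.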

  20. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  1. Computational approaches for rational design of proteins with novel functionalities

    Directory of Open Access Journals (Sweden)

    Manish Kumar Tiwari

    2012-09-01

    Full Text Available Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  2. Computational approaches for rational design of proteins with novel functionalities.

    Science.gov (United States)

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  3. Computational Approach for Studying Optical Properties of DNA Systems in Solution

    DEFF Research Database (Denmark)

    Nørby, Morten Steen; Svendsen, Casper Steinmann; Olsen, Jógvan Magnus Haugaard

    2016-01-01

    In this paper we present a study of the methodological aspects regarding calculations of optical properties for DNA systems in solution. Our computational approach is built upon a fully polarizable QM/MM/Continuum model within a damped linear response theory framework. In this approach the environment is given a highly advanced description in terms of the electrostatic potential through the polarizable embedding model. Furthermore, bulk solvent effects are included in an efficient manner through a conductor-like screening model. With the aim of reducing the computational cost we develop a set of averaged partial charges and distributed isotropic dipole-dipole polarizabilities for DNA suitable for describing the classical region in ground-state and excited-state calculations. Calculations of the UV-spectrum of the 2-aminopurine optical probe embedded in a DNA double helical structure are presented...

  4. Integrating structure-based and ligand-based approaches for computational drug design.

    Science.gov (United States)

    Wilson, Gregory L; Lill, Markus A

    2011-04-01

    Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.
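
    Of the integrated methods listed, fingerprint approaches are the simplest to make concrete: molecules are encoded as bit vectors and compared by Tanimoto similarity. A minimal sketch (the bit sets are toy stand-ins for real structural fingerprints):

        # Tanimoto similarity of two binary fingerprints, represented here
        # as sets of on-bit indices: |A & B| / |A | B|.
        def tanimoto(a: set, b: set) -> float:
            return len(a & b) / len(a | b) if a | b else 0.0

        ligand_a = {1, 4, 7, 9, 12}   # toy fingerprints
        ligand_b = {1, 4, 8, 9, 13}
        print(tanimoto(ligand_a, ligand_b))   # 3/7 ~ 0.43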

  5. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Full Text Available Abstract Background Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity to compute the state transition matrix is O(nN2^(2n)), or O(nN2^n) for a sparse matrix. Results This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN with and without random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required to obtain a given evaluation accuracy increases approximately polynomially with the number of genes n, while the number of Boolean networks N usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, but not directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
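
    The stochastic-computation idea behind SBNs can be shown in a few lines: a probability is encoded as a random bitstream of length L, and ordinary logic gates then operate directly on probabilities (a generic illustration of stochastic logic, not the authors' implementation):

        import numpy as np

        rng = np.random.default_rng(0)
        L = 100_000                  # stochastic sequence length

        a = rng.random(L) < 0.8      # bitstream encoding p_a = 0.8
        b = rng.random(L) < 0.5      # bitstream encoding p_b = 0.5
        print((a & b).mean())        # ~0.40 = p_a * p_b via a single AND gate

    The evaluation accuracy scales with L, which is why the complexity bound above carries a factor of L instead of the number of constituent Boolean networks N.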

  6. A Flexible and Non-instrusive Approach for Computing Complex Structural Coverage Metrics

    Science.gov (United States)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.

  7. Efficient computation of turbulent flow in ribbed passages using a non-overlapping near-wall domain decomposition method

    Science.gov (United States)

    Jones, Adam; Utyuzhnikov, Sergey

    2017-08-01

    Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.

  8. The Recursive Thick Frontier Approach to Estimating Efficiency

    OpenAIRE

    Wagenvoort, Rien; Schure, Paul

    1999-01-01

    The traditional econometric techniques for frontier models, namely the Stochastic Frontier Approach (SFA), the Thick Frontier Approach (TFA) and the Distribution Free Approach (DFA) have in common that they depend on a priori assumptions that are, whether feasible or not, difficult to test. This paper introduces the Recursive Thick Frontier Approach (RTFA) to the estimation of technology parameters when panel data is available. Our approach is based on the assertion that if deviations from th...

  9. Computational systems biology approaches to anti-angiogenic cancer therapeutics.

    Science.gov (United States)

    Finley, Stacey D; Chu, Liang-Hui; Popel, Aleksander S

    2015-02-01

    Angiogenesis is an exquisitely regulated process that is required for physiological processes and is also important in numerous diseases. Tumors utilize angiogenesis to generate the vascular network needed to supply the cancer cells with nutrients and oxygen, and many cancer drugs aim to inhibit tumor angiogenesis. Anti-angiogenic therapy involves inhibiting multiple cell types, molecular targets, and intracellular signaling pathways. Computational tools are useful in guiding treatment strategies, predicting the response to treatment, and identifying new targets of interest. Here, we describe progress that has been made in applying mathematical modeling and bioinformatics approaches to study anti-angiogenic therapeutics in cancer.

  10. Approaches to Computer Modeling of Phosphate Hide-Out.

    Science.gov (United States)

    1984-06-28

    phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties are...ionization of dihydrogen phosphate: H2PO4- = H+ + HPO4(2-), K (B-7); H+ + OH- = H2O, 1/Kw (B-8); H2PO4- + OH- = HPO4(2-) + H2O, K/Kw (B-9)...NRL Memorandum Report 5361, Approaches to Computer Modeling of Phosphate Hide-Out, K. A. S. Hardy and J. C

  11. Hospital efficiency and transaction costs: a stochastic frontier approach.

    Science.gov (United States)

    Ludwig, Martijn; Groot, Wim; Van Merode, Frits

    2009-07-01

    The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.

  12. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Directory of Open Access Journals (Sweden)

    Dilip Swaminathan

    2009-01-01

    kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.

  13. Buildings Energy Efficiency: Interventions Analysis under a Smart Cities Approach

    National Research Council Canada - National Science Library

    Gabriele Battista; Luca Evangelisti; Claudia Guattari; Carmine Basilicata; Roberto de Lieto Vollaro

    2014-01-01

    .... Smart cities can be a viable solution. The methodology traditionally adopted to evaluate building energy efficiency starts from the structure's energy demands analysis and the demands reduction evaluation...

  14. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    Junaid Ali Khan; Muhammad Asif Zahoor Raja; Ijaz Mansoor Qureshi

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with genetic algorithm and local search by pattern search technique. The applicability of this approach ranges from single order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike the other numerical techniques with comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected essential gains in the execution speed.
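
    To make the formulation concrete, here is a deliberately simplified sketch of the trial-solution idea with an unsupervised residual error; it replaces the paper's genetic-algorithm-plus-pattern-search hybrid with plain random search and targets the toy problem y' = -y, y(0) = 1:

        import numpy as np

        rng = np.random.default_rng(0)
        xs = np.linspace(0.0, 1.0, 21)

        def net(params, x):
            # tiny feed-forward network: 5 tanh hidden units, linear output
            w1, b1, w2 = params[:5], params[5:10], params[10:15]
            return np.tanh(np.outer(x, w1) + b1) @ w2

        def residual(params, eps=1e-4):
            # unsupervised error: squared residual of y' + y = 0 for the
            # trial solution y(x) = 1 + x*net(x), which satisfies y(0) = 1
            y = lambda x: 1.0 + x * net(params, x)
            dy = (y(xs + eps) - y(xs - eps)) / (2 * eps)
            return np.mean((dy + y(xs)) ** 2)

        best = rng.normal(size=15)
        best_err = residual(best)
        for _ in range(20000):       # crude stand-in for GA + pattern search
            cand = best + 0.1 * rng.normal(size=15)
            err = residual(cand)
            if err < best_err:
                best, best_err = cand, err

        # maximum deviation from the exact solution exp(-x)
        print(np.abs(1.0 + xs * net(best, xs) - np.exp(-xs)).max())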

  15. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    Science.gov (United States)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving inverse modeling problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
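
    The recycling trick exploits the fact that Krylov subspaces are invariant under the shift J^T J + λI, so one projection serves every damping parameter. A generic numpy sketch of that idea using a Lanczos basis (the authors' Julia/MADS implementation will differ in detail):

        import numpy as np

        def lm_steps_recycled(J, r, lambdas, m=20):
            """Approximate LM steps dx = -(J^T J + lam I)^{-1} J^T r for many
            lam values, reusing one m-dimensional Krylov (Lanczos) basis."""
            g = J.T @ r
            n = J.shape[1]
            Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m)
            q = g / np.linalg.norm(g)
            for k in range(m):                    # Lanczos on A = J^T J
                Q[:, k] = q
                w = J.T @ (J @ q)
                if k > 0:
                    w -= beta[k - 1] * Q[:, k - 1]
                alpha[k] = q @ w
                w -= alpha[k] * q
                beta[k] = np.linalg.norm(w)
                if beta[k] < 1e-12:               # invariant subspace found
                    Q, alpha, beta = Q[:, :k + 1], alpha[:k + 1], beta[:k + 1]
                    break
                q = w / beta[k]
            T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
            e1 = np.zeros(len(alpha)); e1[0] = np.linalg.norm(g)
            # the same (Q, T) is recycled for every damping parameter:
            return {lam: -Q @ np.linalg.solve(T + lam * np.eye(len(alpha)), e1)
                    for lam in lambdas}

    After the one-off Lanczos pass, each additional damping parameter costs only a small m-by-m solve, which is where the reported speed-ups come from.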

  16. A Computational Differential Geometry Approach to Grid Generation

    CERN Document Server

    Liseikin, Vladimir D

    2007-01-01

    The process of breaking up a physical domain into smaller sub-domains, known as meshing, facilitates the numerical solution of partial differential equations used to simulate physical systems. This monograph gives a detailed treatment of applications of geometric methods to advanced grid technology. It focuses on and describes a comprehensive approach based on the numerical solution of inverted Beltramian and diffusion equations with respect to monitor metrics for generating both structured and unstructured grids in domains and on surfaces. In this second edition the author takes a more detailed and practice-oriented approach towards explaining how to implement the method by: Employing geometric and numerical analyses of monitor metrics as the basis for developing efficient tools for controlling grid properties. Describing new grid generation codes based on finite differences for generating both structured and unstructured surface and domain grids. Providing examples of applications of the codes to the genera...

  17. THE EFFICIENCY OF TECHNOLOGY TRANSFER – THEORETICAL AND METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Andreea-Clara MUNTEANU

    2006-06-01

    Full Text Available As the importance and complexity of technology transfer have increased, the need for adequate systems for assessing the efficiency of this process has become ever more obvious. Introducing sustainability criteria requires the creation of a complex framework for analysing and studying efficiency, one that incorporates all three dimensions of contemporary economic development: economic, social and environmental.

  18. Measurement of dynamic efficiency: a directional distance function parametric approach

    NARCIS (Netherlands)

    Serra, T.; Oude Lansink, A.G.J.M.; Stefanou, S.E.

    2011-01-01

    This research proposes a parametric estimation of the structural dynamic efficiency measures proposed by Silva and Oude Lansink (2009). Overall, technical and allocative efficiency measurements are derived based on a directional distance function and the duality between this function and the optimal

  19. Efficiency of flow-driven adiabatic spin inversion under realistic experimental conditions: A computer simulation

    NARCIS (Netherlands)

    Trampel, R.; Jochimsen, T.H.; Mildner, T.; Norris, D.G.; Moller, H.E.

    2004-01-01

    Continuous arterial spin labeling (CASL) using adiabatic inversion is a widely used approach for perfusion imaging. For the quantification of perfusion, a reliable determination of the labeling efficiency is required. A numerical method for predicting the labeling efficiency in CASL experiments unde

  20. An Approach for Location privacy in Pervasive Computing Environment

    Directory of Open Access Journals (Sweden)

    Sudheer Kumar Singh

    2010-05-01

    Full Text Available This paper focuses on location privacy in location-based services. Location privacy is a particular type of information privacy that can be defined as the ability to prevent others from learning one's current or past location. Many systems, such as GPS, implicitly and automatically give their users location privacy. Once a user sends his or her current location to the application server, the server stores it in its database; the user cannot delete or modify the location data after it has been sent. Addressing this problem, in this paper we give a theoretical concept for protecting location privacy in a pervasive computing environment. The approach is based on user-anonymity-based location privacy. We first go through a basic user-anonymity-based location privacy approach that uses a trusted proxy, and, by analysis of this approach, we propose an improvement over it using dummy locations of users and also dummies of the services requested by users from the application server. The proposed approach reduces the user's overhead of extracting the necessary information from the reply messages coming from the application server. In this approach, the user sends a message containing the current location, ID and requested service to the trusted proxy; the trusted proxy generates dummy locations related to the current location and also generates a temporary pseudonym corresponding to the real ID of the user. After analysis of this approach, we found a problem with the requested service. Addressing this problem, we improve our method by using dummies of the requested service generated by the trusted proxy. The trusted proxy generates the dummies (false positions) by dummy-location algorithms.
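
    The dummy-location idea is straightforward to sketch: the proxy forwards the real position hidden among k plausible decoys, so the application server cannot single it out (a generic illustration; the abstract does not specify the paper's dummy-generation algorithm):

        import random

        def with_dummies(lat, lon, k=4, spread=0.01):
            """Return the real position shuffled among k nearby dummies."""
            points = [(lat + random.uniform(-spread, spread),
                       lon + random.uniform(-spread, spread))
                      for _ in range(k)]
            points.append((lat, lon))
            random.shuffle(points)   # server cannot tell which one is real
            return points

        # The proxy would forward all k+1 positions (plus dummy service
        # requests) and filter the replies down to the one the user needs.
        print(with_dummies(40.4168, -3.7038))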

  1. An efficient numerical algorithm for computing densely distributed positive interior transmission eigenvalues

    Science.gov (United States)

    Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan

    2017-03-01

    We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell's equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.

  2. Novel computational approaches for the analysis of cosmic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)

    2016-07-01

    In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high-performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.

  3. Computational approaches to understand cardiac electrophysiology and arrhythmias

    Science.gov (United States)

    Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.

    2012-01-01

    Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409

  4. Computational Approach to Dendritic Spine Taxonomy and Shape Transition Analysis

    Science.gov (United States)

    Bokota, Grzegorz; Magnowska, Marta; Kuśmierczyk, Tomasz; Łukasik, Michał; Roszkowska, Matylda; Plewczynski, Dariusz

    2016-01-01

    The common approach in morphological analysis of dendritic spines of mammalian neuronal cells is to categorize spines into subpopulations based on whether they are stubby, mushroom, thin, or filopodia shaped. The corresponding cellular models of synaptic plasticity, long-term potentiation, and long-term depression associate the synaptic strength with either spine enlargement or spine shrinkage. Although a variety of automatic spine segmentation and feature extraction methods were developed recently, no approaches allowing for an automatic and unbiased distinction between dendritic spine subpopulations and detailed computational models of spine behavior exist. We propose an automatic and statistically based method for the unsupervised construction of spine shape taxonomy based on arbitrary features. The taxonomy is then utilized in the newly introduced computational model of behavior, which relies on transitions between shapes. Models of different populations are compared using supplied bootstrap-based statistical tests. We compared two populations of spines at two time points. The first population was stimulated with long-term potentiation, and the other in the resting state was used as a control. The comparison of shape transition characteristics allowed us to identify the differences between population behaviors. Although some extreme changes were observed in the stimulated population, statistically significant differences were found only when whole models were compared. The source code of our software is freely available for non-commercial use. Contact: d.plewczynski@cent.uw.edu.pl. PMID:28066226

  5. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy through static optimization. Constraints can be handled in the same framework using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
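
    The following sketch illustrates the simulation-based objective the abstract describes: Monte Carlo price paths under a parametric (here, softmax-weighted) static trade schedule, with expected cost and CVaR estimated from the simulated cost samples. The price and impact model are illustrative assumptions, not the authors' formulation.

        # Monte Carlo estimate of expected execution cost and CVaR for a
        # parametric trade schedule. Toy geometric price model with a simple
        # temporary-impact term; all parameters are placeholders.
        import numpy as np

        def exec_cost(theta, n_paths=10000, n_steps=10, shares=1.0,
                      sigma=0.01, impact=0.1, seed=0):
            rng = np.random.default_rng(seed)
            w = np.exp(theta - np.max(theta))   # softmax schedule from theta
            w /= w.sum()
            price = np.ones(n_paths)
            cost = np.zeros(n_paths)
            for t in range(n_steps):
                trade = shares * w[t]
                cost += trade * (price + impact * trade)   # pay temporary impact
                price *= np.exp(sigma * rng.standard_normal(n_paths))
            return cost

        def cvar(samples, alpha=0.95):
            tail = np.sort(samples)[int(alpha * len(samples)):]
            return tail.mean()                  # mean of the worst (1-alpha) tail

        cost = exec_cost(np.zeros(10))
        print("E[cost] =", cost.mean(), " CVaR95 =", cvar(cost))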

  6. An optimized Leave One Out approach to efficiently identify outliers

    Science.gov (United States)

    Biagi, L.; Caldera, S.; Perego, D.

    2012-04-01

    contribution of each subvector is subtracted from the batch result by algebraic decompositions, with minimal computational effort: this holds for the parameters, the a posteriori residuals, and the variance. Therefore all the n subvectors of residuals can be checked. The algorithm provides exactly the same results as the usual LOO but is significantly faster, because it does not require any iteration of the adjustment. In some sense, this is an inverse application of the well-known sequential LS, where the parameters are estimated sequentially by adding the contribution of new observations as they become available. In the presentation, the optimized LOO is discussed; its application to a very simple levelling network is compared with the usual approaches to outlier identification, in view of a further study on its application to the real-time quality checking of positioning services.
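
    The algebraic flavor of this optimization can be seen in the classic hat-matrix identity for least squares, which yields all n leave-one-out residuals from a single batch adjustment; the sketch below demonstrates it. The paper's exact decompositions for grouped observations are not reproduced here.

        # All n LOO residuals from one batch LS fit, without re-solving n times,
        # via the identity e_loo_i = e_i / (1 - h_ii), h = diag(A Q A^T).
        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(50, 3))                 # design matrix
        y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

        Q = np.linalg.inv(A.T @ A)
        x_hat = Q @ A.T @ y                          # batch LS estimate
        e = y - A @ x_hat                            # a posteriori residuals
        h = np.einsum('ij,jk,ik->i', A, Q, A)        # diagonal of the hat matrix
        e_loo = e / (1.0 - h)                        # all n LOO residuals at once

        # brute-force check for one observation
        k = 7
        mask = np.arange(50) != k
        x_k = np.linalg.lstsq(A[mask], y[mask], rcond=None)[0]
        assert np.isclose(y[k] - A[k] @ x_k, e_loo[k])
        print("LOO residuals verified")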

  7. A computationally efficient denoising and hole-filling method for depth image enhancement

    Science.gov (United States)

    Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser

    2016-04-01

    Depth maps captured by Kinect depth cameras are widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by a combination of Gaussian kernel filtering and anisotropic filtering; the hole-filling by a combination of morphological filtering and zero-block filtering. Experimental results on publicly available datasets indicate the superiority of the developed method, in terms of both depth error and computational efficiency, over three existing methods.
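
    A rough stand-in for the described pipeline, built from standard OpenCV primitives: Gaussian smoothing for denoising and morphological closing to fill zero-valued holes. The paper's anisotropic and zero-block filters are not reproduced here.

        # Denoise-then-fill sketch for a depth map. Large holes may survive a
        # single 7x7 closing; this only illustrates the overall structure.
        import numpy as np
        import cv2

        depth = np.random.default_rng(0).integers(500, 4000, (240, 320)).astype(np.uint16)
        depth[100:110, 150:165] = 0                      # synthetic black hole

        smoothed = cv2.GaussianBlur(depth, (5, 5), 0)    # denoise
        kernel = np.ones((7, 7), np.uint8)
        closed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)

        filled = np.where(depth == 0, closed, smoothed)  # fill holes only
        print("remaining zero pixels:", int((filled == 0).sum()))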

  8. Combined analytical FEM approach for efficient simulation of Lamb wave damage detection.

    Science.gov (United States)

    Shen, Yanfeng; Giurgiutiu, Victor

    2016-07-01

    Lamb waves have been widely explored as a promising inspection tool for non-destructive evaluation (NDE) and structural health monitoring (SHM). This article presents a combined analytical finite element model (FEM) approach (CAFA) for the accurate, efficient, and versatile simulation of 2-D Lamb wave propagation and interaction with damage. CAFA used a global analytical solution to model wave generation, propagation, scattering, mode conversion, and detection, while the wave-damage interaction coefficients (WDICs) were extracted from harmonic analysis of local FEM with non-reflective boundaries (NRB). The analytical procedure was coded using MATLAB, and a predictive simulation tool called WaveFormRevealer 2-D was developed. The methodology of obtaining WDICs from local FEM was presented. Case studies were carried out for Lamb wave propagation in a pristine plate and a damaged plate. CAFA predictions compared well with full scale multi-physics FEM simulations and experiments with scanning laser Doppler vibrometry (SLDV), while achieving remarkable performance in computational efficiency and computer resource saving compared with conventional FEM.

  9. An efficient approach to calculating Wannier states and extension to inhomogeneous systems

    Energy Technology Data Exchange (ETDEWEB)

    Bissbort, Ulf; Hofstetter, Walter [ITP, Goethe-Universitaet Frankfurt (Germany)

    2013-07-01

    Wannier states are a fundamental and central constituent in the construction of many-body models, as they are restricted to the single-particle Hilbert subspace of the respective band while minimizing the spatial spread. Although simple in their initial definition as discrete Fourier transforms of the Bloch states, their actual computation amounts to a non-trivial, high-dimensional minimization problem for the spatial variance over the complex phases of the single-particle Bloch states. Various involved techniques have been devised to treat this minimization problem efficiently, but it quickly becomes numerically demanding for all but the simplest lattice geometries. We present an alternative approach, which allows for an efficient numerical calculation of the maximally localized Wannier states and entirely circumvents the pitfalls associated with the minimization technique, such as getting stuck in local minima. The computational effort scales favorably with increasing dimension and lattice complexity in comparison to the minimization technique. Furthermore, it allows for the first time a clear and unambiguous definition of Wannier states in inhomogeneous systems.
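
    One well-known minimization-free route to localized Wannier states in 1D, shown below for a gapped dimerized chain, is to diagonalize the position operator projected onto the band (Kivelson's method). This is a generic illustration of the circumvention idea, not necessarily the authors' specific algorithm.

        # Localized Wannier-like states without spread minimization: diagonalize
        # the band-projected position operator PXP of a gapped dimerized chain.
        import numpy as np

        N = 60                                        # sites, two-band model
        t = np.where(np.arange(N - 1) % 2 == 0, 1.0, 0.5)   # alternating hoppings
        H = -(np.diag(t, 1) + np.diag(t, -1))
        E, V = np.linalg.eigh(H)
        band = V[:, :N // 2]                          # states of the lower band

        X = np.diag(np.arange(N, dtype=float))        # position operator
        PXP = band.T @ X @ band                       # band-projected position
        _, U = np.linalg.eigh(PXP)
        wannier = band @ U                            # one localized state per cell

        centers = (wannier**2 * np.arange(N)[:, None]).sum(axis=0)
        print("first Wannier centers:", np.round(np.sort(centers), 2)[:5])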

  10. An Information-Theoretic Approach for Energy-Efficient Collaborative Tracking in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Arienzo Loredana

    2010-01-01

    The problem of collaborative tracking of mobile nodes in wireless sensor networks is addressed. Using a novel metric derived from the energy model in LEACH (W.B. Heinzelman, A.P. Chandrakasan and H. Balakrishnan, Energy-Efficient Communication Protocol for Wireless Microsensor Networks, in: Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS '00), 2000) and aiming at an efficient use of resources, the approach combines target tracking with node selection procedures in order to select informative sensors and minimize the energy consumption of the tracking task. We lay out a cluster-based architecture to address the limitations in computational power, battery capacity, and communication capacity of the sensor devices. The computation of the posterior Cramer-Rao bound (PCRB) based on received-signal-strength measurements is considered. To track mobile nodes, two particle filters are used: the bootstrap particle filter and the unscented particle filter, both in the centralized and in the distributed manner. Their performances are compared with the theoretical lower bound, the PCRB. To save energy, a node selection procedure based on greedy algorithms is proposed; the node selection problem is formulated as a cross-layer optimization problem and solved using greedy algorithms.
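
    A generic sketch of greedy, energy-aware node selection: each step adds the sensor with the best information-gain-per-energy ratio until an energy budget is exhausted. The log-det information measure and the energy numbers are illustrative assumptions, not the paper's PCRB/LEACH formulation.

        # Greedy sensor selection under an energy budget.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 20
        fisher = [np.diag(rng.uniform(0.1, 2.0, 2)) for _ in range(n)]  # per-sensor FIM
        energy = rng.uniform(1.0, 5.0, n)                               # per-sensor cost

        def info(selected):
            total = sum((fisher[i] for i in selected), np.eye(2) * 1e-6)
            return np.log(np.linalg.det(total))     # log-det information measure

        selected, budget = [], 12.0
        while True:
            candidates = [i for i in range(n)
                          if i not in selected and energy[i] <= budget]
            if not candidates:
                break
            gain = {i: (info(selected + [i]) - info(selected)) / energy[i]
                    for i in candidates}
            best = max(gain, key=gain.get)          # best gain per unit energy
            selected.append(best)
            budget -= energy[best]
        print("selected sensors:", selected)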

  11. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    Science.gov (United States)

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes are evaluated for 32-bit and 64-bit ARM-based computers and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice for minimizing time to solution. The ARM64 and ARM32 computational performances are similar for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations, the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy-efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy-efficient than the x86 CPU for some core counts and molecular sizes.

  12. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna Cattani

    2016-04-01

    The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a large density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (an ODE system) and its continuous counterpart (a PDE system), obtained through a limit process in which the number of neurons confined in a bounded region of brain tissue is sent to infinity. Specifically, in the discrete model each cell is described by a set of time-dependent variables, whereas in the continuum model cells are grouped into populations described by a set of continuous variables. Communications between populations, which translate into interactions between the discrete and continuous models, are the essence of the hybrid model presented here. The cerebellum and cerebellum-like structures show, in their granular layer, a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular-layer network and comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant reduction in computational cost, increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround organization, and time-windowing.
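
    A toy version of the hybrid coupling: a discretized continuum rate field driven at a few grid points by discrete ODE neurons, which in turn receive feedback from the field. Dynamics and parameters are illustrative only, far simpler than the cerebellar granular-layer model of the paper.

        # Hybrid discrete/continuum sketch: 5 discrete units coupled to a
        # diffusive rate field on a 1D periodic grid, forward-Euler integrated.
        import numpy as np

        nx, n_disc, dt, steps = 100, 5, 0.1, 500
        rate = np.zeros(nx)                    # continuum population field r(x, t)
        v = np.zeros(n_disc)                   # discrete neuron potentials
        centers = np.linspace(10, 90, n_disc).astype(int)

        for _ in range(steps):
            lap = np.roll(rate, 1) - 2 * rate + np.roll(rate, -1)   # diffusion
            drive = np.zeros(nx)
            drive[centers] = np.maximum(v, 0.0)                     # discrete -> field
            rate += dt * (-rate + 0.5 * lap + drive)
            feedback = rate[centers]                                # field -> discrete
            v += dt * (-v + 1.0 - 0.8 * feedback)                   # leaky dynamics

        print("field mean:", rate.mean(), " discrete potentials:", np.round(v, 2))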

  13. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    Science.gov (United States)

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" at the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution to the elucidation of mechanisms and the rationalization of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have, since their introduction, been among the most widely applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion is in some cases deceiving and that more complex pathways may be operating. In this Account, the application of density functional theory (DFT) and other computational methods to systems catalyzed by the diarylprolinol-silyl ethers is described. It is illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions

  14. Heuristic approaches for energy-efficient shared restoration in WDM networks

    Science.gov (United States)

    Alilou, Shahab

    In recent years, there has been ongoing research on the design of energy-efficient Wavelength Division Multiplexing (WDM) networks. The explosive growth of Internet traffic has led to increased power consumption of network components. Network survivability has also been a relevant research topic, as it plays a crucial role in assuring continuity of service, with no disruption, regardless of network component failures. Network survivability mechanisms tend to utilize considerable resources, such as spare capacity, in order to protect and restore information. This thesis investigates techniques for reducing energy demand and enhancing energy efficiency in the context of network survivability. We propose two novel heuristic energy-efficient shared protection approaches for WDM networks. These approaches save energy by putting devices that are not in use into sleep mode while providing shared backup paths to satisfy network survivability. The first approach exploits properties of a mathematical series in order to assign weights to the network links. It aims to reduce network power consumption indirectly by aggregating traffic on a set of nodes and links with high traffic load. Routing traffic over links and nodes that are already utilized makes it possible to put the links and nodes with no load into sleep mode. The second approach dynamically routes traffic through nodes and links with high traffic load. Like the first approach, it computes a pair of paths for every newly arrived demand, comparing the power consumption of nodes and links in the network before the demand arrives with their potential power consumption if chosen along the paths of this demand. Simulations of two different networks were used to compare the total network power consumption obtained using the proposed techniques against a standard shared-path restoration scheme.
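
    A sketch of the load-aware weighting idea behind the first approach: links with existing load receive geometrically smaller weights, so shortest-path routing aggregates traffic and idle links can sleep. The geometric series is a stand-in for the thesis's specific mathematical series.

        # Load-aware link weights with shortest-path routing (networkx).
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([(0, 1), (1, 2), (2, 3), (0, 4), (4, 3), (1, 3)])
        for u, v in G.edges:
            G.edges[u, v]['load'] = 0

        def set_weights(G, base=1.0, ratio=0.5):
            # weight shrinks geometrically with existing load: base * ratio**load
            for u, v in G.edges:
                G.edges[u, v]['weight'] = base * ratio ** G.edges[u, v]['load']

        for src, dst in [(0, 3), (0, 3), (4, 2)]:     # demands arrive one by one
            set_weights(G)
            path = nx.shortest_path(G, src, dst, weight='weight')
            for u, v in zip(path, path[1:]):
                G.edges[u, v]['load'] += 1

        idle = [e for e in G.edges if G.edges[e]['load'] == 0]
        print("links that can sleep:", idle)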

  15. Simple a posteriori slope limiter (Post Limiter) for high resolution and efficient flow computations

    Science.gov (United States)

    Kitamura, Keiichi; Hashimoto, Atsushi

    2017-07-01

    A simple and efficient a posteriori slope limiter ("Post Limiter") is proposed for the compressible Navier-Stokes and Euler equations, and examined in 1D and 2D. The Post Limiter employs un-limited solutions where and when possible (even at shocks), and blends the un-limited and (1st-order) limited solutions smoothly, leading to an effective fourfold resolution improvement in 1D. The idea was inspired by the a posteriori limiting approaches originally developed by Clain et al. (2011) [18] for higher-order flow computations, but the variant proposed here is simplified and tailored to 2nd-order spatial accuracy, with improvements in both solution quality and convergence. In fact, no iteration process is required to determine the optimal order of accuracy, since the limited and un-limited values are both available at once at 2nd order. In 2D, several numerical examples have been dealt with, and both the κ = 1/3 MUSCL reconstruction (in a structured solver) and the Green-Gauss reconstruction (in an unstructured solver) demonstrated resolution improvement (nearly 4 × 4 times), convergence acceleration, and the removal of numerical noise. Even on triangular meshes (on which least-squares reconstruction is used), the unstructured solver showed improved solutions when cell geometries (cell-orientation angles) are properly taken into account. Therefore, the Post Limiter is readily incorporated into existing codes.
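
    A 1D sketch of the a posteriori principle: advance all cells with un-limited 2nd-order slopes, detect troubled cells (those producing new extrema), and fall back to limited (minmod) slopes only there. Linear advection stands in for the Euler/Navier-Stokes systems treated in the paper.

        # A posteriori fallback for a 2nd-order 1D advection step (periodic grid).
        import numpy as np

        nx, c, dt, dx = 100, 1.0, 0.4, 1.0           # CFL = 0.4
        u = np.where((np.arange(nx) > 40) & (np.arange(nx) < 60), 1.0, 0.0)

        def step(u, slope):
            # upwind flux at i+1/2 from the left cell's linear reconstruction
            uL = u + 0.5 * slope
            flux = c * uL
            return u - (dt / dx) * (flux - np.roll(flux, 1))

        def minmod(a, b):
            return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

        du_c = 0.5 * (np.roll(u, -1) - np.roll(u, 1))         # un-limited slope
        du_m = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope

        cand = step(u, du_c)                                  # un-limited candidate
        lo = np.minimum(np.roll(u, 1), np.minimum(u, np.roll(u, -1)))
        hi = np.maximum(np.roll(u, 1), np.maximum(u, np.roll(u, -1)))
        troubled = (cand < lo - 1e-12) | (cand > hi + 1e-12)  # a posteriori check

        u_new = np.where(troubled, step(u, du_m), cand)       # limited in troubled cells
        print("troubled cells this step:", int(troubled.sum()))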

  16. Energy-efficient and security-optimized AES hardware design for ubiquitous computing

    Institute of Scientific and Technical Information of China (English)

    Chen Yicheng; Zou Xuecheng; Liu Zhenglin; Han Yu; Zheng Zhaoxia

    2008-01-01

    Ubiquitous computing must incorporate a certain level of security. For severely resource-constrained applications, energy-efficient and small-size implementation of cryptographic algorithms is a critical problem. Hardware implementations of the Advanced Encryption Standard (AES) for authentication and encryption are presented. An energy consumption variable is derived to evaluate low-power design strategies for battery-powered devices. It is shown that compact AES architectures fail to optimize the AES hardware energy, whereas reducing invalid switching activities and implementing power-optimized submodules are the reasonable methods. Implementations of different substitution box (S-Box) structures are presented in a 0.25 μm, 1.8 V CMOS (complementary metal oxide semiconductor) standard cell library. The comparisons and trade-offs among area, security, and power are explored. The experimental results show that Galois-field composite S-Boxes have smaller size and the highest security but consume considerably more power, whereas decoder-switch-encoder S-Boxes have the best power characteristics with disadvantages in terms of size and security. Combining these two types of S-Boxes, instead of using homogeneous S-Boxes, in an AES circuit leads to optimal schemes. The technique of latch-dividing the data path is analyzed, and quantitative simulation results demonstrate that this approach diminishes glitches effectively at a very low hardware cost.

  17. An Efficient Multi-Scale Modelling Approach for ssDNA Motion in Fluid Flow

    Institute of Scientific and Technical Information of China (English)

    M. Benke; E. Shapiro; D. Drikakis

    2008-01-01

    The paper presents a multi-scale modelling approach for simulating macromolecules in fluid flows. Macromolecule transport at low number densities is frequently encountered in biomedical devices, such as separators, detection and analysis systems. Accurate modelling of this process is challenging due to the wide range of physical scales involved. The continuum approach is not valid for low solute concentrations, but the large timescales of the fluid flow make purely molecular simulations prohibitively expensive. A promising multi-scale modelling strategy is provided by the meta-modelling approach considered in this paper. Meta-models are based on the coupled solution of fluid flow equations and equations of motion for a simplified mechanical model of macromolecules. The approach enables simulation of individual macromolecules at macroscopic time scales. Meta-models often rely on particle-corrector algorithms, which impose length constraints on the mechanical model. Lack of robustness of the particle-corrector algorithm employed can lead to slow convergence and numerical instability. A new FAst Linear COrrector (FALCO) algorithm is introduced in this paper, which significantly improves computational efficiency in comparison with the widely used SHAKE algorithm. Validation of the new particle corrector against a simple analytic solution is performed and improved convergence is demonstrated for ssDNA motion in a lid-driven micro-cavity.
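
    The particle-corrector problem can be illustrated with the baseline the paper improves upon: a SHAKE-style iteration that projects bead positions back onto the segment-length constraints after an unconstrained move. FALCO itself is not reproduced here.

        # SHAKE-style length-constraint correction for a 2D bead chain.
        import numpy as np

        rng = np.random.default_rng(3)
        n, L = 10, 1.0                                # beads, segment rest length
        x = np.cumsum(np.ones((n, 1)) * [1.0, 0.0], axis=0)  # straight chain
        x += 0.1 * rng.normal(size=(n, 2))            # unconstrained displacement

        for _ in range(200):                          # Gauss-Seidel sweeps
            for i in range(n - 1):
                d = x[i + 1] - x[i]
                r = np.linalg.norm(d)
                corr = 0.5 * (r - L) * d / r          # split correction between beads
                x[i] += corr
                x[i + 1] -= corr

        seg = np.linalg.norm(np.diff(x, axis=0), axis=1)
        print("max length error:", np.abs(seg - L).max())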

  18. A Novel Computer-Aided Approach for Parametric Investigation of Custom Design of Fracture Fixation Plates.

    Science.gov (United States)

    Chen, Xiaozhong; He, Kunjin; Chen, Zhengming

    2017-01-01

    The present study proposes an integrated computer-aided approach combining femur surface modeling, fracture evidence recovery, plate creation, and plate modification in order to conduct a parametric investigation of the design of a custom plate for a specific patient. The approach improves the design efficiency of patient-specific plates based on the patient's femur parameters and the fracture information, and it opens the way to further plate modification and optimization. The three-dimensional (3D) surface model of a detailed femur and the corresponding fixation plate were represented with high-level feature parameters, and the shape of the specific plate was recursively modified in order to obtain the optimal plate for a specific patient. The proposed approach was tested and verified on a case study, and it could help orthopedic surgeons design and modify plates to fit the specific femur anatomy and fracture information. PMID:28203270

  20. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    Science.gov (United States)

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions, clinically obtained via an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in noninvasively estimating FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001). Against invasive measurements, functionally significant lesions were identified by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%; the corresponding correlation was 0.729 (P < 0.001). Predictions were computed with the machine-learning model on a workstation with a 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
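
    A toy illustration of the workflow described above: a regressor trained on synthetically generated anatomies whose targets come from a (here, deliberately fake) physics surrogate, then evaluated on held-out samples. The features, the surrogate function, and the model choice are placeholders, not the paper's pipeline.

        # Train-on-synthetic-physics workflow, in miniature.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(4)
        n = 5000
        stenosis = rng.uniform(0.0, 0.8, n)          # degree of lumen narrowing
        length = rng.uniform(1.0, 30.0, n)           # lesion length (mm)
        radius = rng.uniform(1.0, 3.0, n)            # reference radius (mm)
        X = np.column_stack([stenosis, length, radius])

        def fake_physics_ffr(s, l, r):
            # placeholder pressure-drop surrogate, NOT a validated physics model
            return np.clip(1.0 - 0.6 * s**2 * (l / 30.0) / r, 0.0, 1.0)

        y = fake_physics_ffr(stenosis, length, radius)
        model = GradientBoostingRegressor().fit(X[:4000], y[:4000])
        pred = model.predict(X[4000:])
        print("test MAE vs surrogate:", np.abs(pred - y[4000:]).mean())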