Energy Technology Data Exchange (ETDEWEB)
Flynn, Charles Joseph [QM Power, Inc., Kansas City, MO (United States)]
2018-02-13
failure-prone capacitors from the power stage. Q-Sync's simpler electronics also yield higher efficiency: once line-synchronous operating speed is reached in the first 5 seconds of operation, the PWM circuits drop out and a much less energy-intensive "pass-through" circuit takes over, eliminating the power the PCB would otherwise spend on the now-obviated power conversions and PWM processing and allowing the grid-supplied AC power to sustain the motor's ongoing operation.
Efficient computation of hashes
International Nuclear Information System (INIS)
Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R
2014-01-01
The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can hinder service quality and even pose security challenges that can only be addressed by the use of parallel hash-tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype of a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
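As a generic illustration of the parallel tree mode the paper advocates (a hedged sketch over SHA-256, not the authors' Keccak prototype), a binary Merkle tree can be computed like this:

```python
import hashlib

def merkle_root(chunks):
    """Hash each leaf, then combine pairwise level by level until one root
    remains; leaf hashing is embarrassingly parallel, unlike a single
    sequential Merkle-Damgard pass over the whole input."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"block0", b"block1", b"block2", b"block3"]).hex())
```

Because each leaf digest depends only on its own chunk, the first level can be farmed out to many workers, which is the efficiency argument made above.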
Efficient computation of argumentation semantics
Liao, Beishui
2013-01-01
Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligent…
Higher-order techniques in computational electromagnetics
Graglia, Roberto D
2016-01-01
Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy and reliability, and reduce the computational cost, of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.
Cost Efficiency in Public Higher Education.
Robst, John
This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…
GATE: Improving the computational efficiency
International Nuclear Information System (INIS)
Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.
2006-01-01
GATE is a software package dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of such simulations is the fundamental burden of computation time. This manuscript describes three different techniques to improve the efficiency of these simulations. First, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. Next, the newly designed cluster version of the GATE software is described; experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid concludes the description of these efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable.
Implementation of cloud computing in higher education
Asniar; Budiawan, R.
2016-04-01
Cloud computing research is a new trend in distributed computing, where people have developed services and SOA (Service Oriented Architecture) based applications. This technology is very useful to implement, especially in higher education. This research studies the need for, and feasibility of, cloud computing in higher education, and then proposes a model of cloud computing services for higher education in Indonesia that can be implemented to support academic activities. A literature study is used as the research methodology to derive the proposed model. Finally, SaaS and IaaS are the cloud computing services proposed for implementation in Indonesian higher education, and the hybrid cloud is the recommended deployment model.
Retrofitting the 5045 Klystron for Higher Efficiency
International Nuclear Information System (INIS)
Jensen, Aaron; Fazio, Michael; Haase, Andy; Jongewaard, Erik; Kemp, Mark; Neilson, Jeff
2015-01-01
The 5045 klystron has been in production and accelerating particles at SLAC National Accelerator Laboratory for over 25 years. Although the design has undergone some changes there are still significant opportunities for improvement in performance. Retrofitting the 5045 for higher efficiencies and a more mono-energetic spent beam profile is presented.
Efficient computation of Laguerre polynomials
A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)
2017-01-01
An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials L_n^{(α)}(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions, valid for n large and α small, are used.
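The three-term recurrence mentioned above is the classical one, k·L_k = (2k − 1 + α − z)·L_{k−1} − (k − 1 + α)·L_{k−2}, and can be sketched in a few lines (a Python illustration, not the LaguerrePol Fortran module):

```python
def laguerre(n, alpha, z):
    """Evaluate the generalized Laguerre polynomial L_n^{(alpha)}(z) with
    the classical three-term recurrence:
    k*L_k = (2k - 1 + alpha - z)*L_{k-1} - (k - 1 + alpha)*L_{k-2}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 1.0 + alpha - z            # L_0 and L_1
    for k in range(2, n + 1):
        p_prev, p = p, ((2 * k - 1 + alpha - z) * p
                        - (k - 1 + alpha) * p_prev) / k
    return p

print(laguerre(2, 0.0, 1.0))  # L_2(1) = (1 - 4 + 2)/2 = -0.5
```

Forward recurrence like this is cheap but loses accuracy in some parameter regimes, which is why the module above also relies on asymptotic expansions.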
Efficient Secure Multiparty Subset Computation
Directory of Open Access Journals (Sweden)
Sufang Zhou
2017-01-01
The secure subset problem is important in secure multiparty computation, which is a vital field in cryptography. Most existing protocols for this problem can only keep the elements of one set private, while leaking the elements of the other set; in other words, they cannot solve the secure subset problem perfectly. The few studies that have addressed truly secure subsets were mainly based on oblivious polynomial evaluation, with inefficient computation. In this study, we first design an efficient secure subset protocol, based on a new encoding method and a homomorphic encryption scheme, for sets whose elements are drawn from a known set. If the elements of the sets are taken from a large domain, that protocol is inefficient. Using a Bloom filter and a homomorphic encryption scheme, we further present an efficient protocol whose computational complexity is linear in the cardinality of the large set, which is considered practical for inputs consisting of large amounts of data. The second protocol may, however, yield a false positive; this probability can be rapidly decreased by re-executing the protocol with different hash functions. Finally, we present experimental performance analyses of these protocols.
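The Bloom-filter encoding underlying the second protocol can be illustrated in isolation (a plaintext sketch showing only the filter and its one-sided error, not the homomorphic-encryption layer; the SHA-256 salting scheme is an assumption):

```python
import hashlib

class BloomFilter:
    """Plain (unencrypted) Bloom filter with k salted SHA-256 positions."""
    def __init__(self, m_bits, n_hashes):
        self.m, self.k, self.bits = m_bits, n_hashes, 0

    def _positions(self, item):
        for salt in range(self.k):
            h = hashlib.sha256(bytes([salt]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # no false negatives; false-positive rate ~ (1 - exp(-k*n/m))**k
        return all(self.bits >> p & 1 for p in self._positions(item))

big = BloomFilter(4096, 4)
for e in [b"alice", b"bob", b"carol"]:
    big.add(e)
small = [b"alice", b"bob"]
print(all(big.might_contain(e) for e in small))  # True: subset check passes
```

A false positive here corresponds to the protocol's one-sided error; re-running with different salts (hash functions) multiplies the error probabilities together, as the abstract notes.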
Towards higher reliability of CMS computing facilities
International Nuclear Information System (INIS)
Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A
2012-01-01
The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflows efficiencies. The performance against these tests seen at the sites during the first years of LHC running is as well reviewed.
Phosphorus Processing—Potentials for Higher Efficiency
Directory of Open Access Journals (Sweden)
Ludwig Hermann
2018-05-01
In the aftermath of the adoption of the Sustainable Development Goals (SDGs) and the Paris Agreement (COP21) by virtually all United Nations, producing more with less is imperative. In this context, phosphorus processing, despite its high efficiency compared to other steps in the value chain, needs to be revisited by science and industry. During processing, phosphorus is lost to phosphogypsum, disposed of in stacks globally piling up to 3–4 billion tons and growing by about 200 million tons per year, or directly discharged to the sea. Eutrophication, acidification, and long-term pollution are the environmental impacts of both practices. Economic and regulatory framework conditions determine whether the industry continues wasting phosphorus, pursues efficiency improvements, or stops operations altogether. While reviewing current industrial practice and the potential for increasing processing efficiency with lower impact, the article addresses the potentially conflicting goals of low energy and material use, as well as Life Cycle Assessment (LCA) as a tool for evaluating the relative impacts of improvement strategies. Finally, options by which corporations could pro-actively and credibly demonstrate phosphorus stewardship, as well as options by which policy makers could enforce improvement without impairing business locations, are discussed.
Power-efficient computer architectures recent advances
Själander, Magnus; Kaxiras, Stefanos
2014-01-01
As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp…
Energy efficient distributed computing systems
Lee, Young-Choon
2012-01-01
The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005. From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems. These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems. This book brings together a group of outsta…
Higher order correlations in computed particle distributions
International Nuclear Information System (INIS)
Hanerfeld, H.; Herrmannsfeldt, W.; Miller, R.H.
1989-03-01
The rms emittances calculated for beam distributions using computer simulations are frequently dominated by higher order aberrations. Thus there are substantial open areas in the phase space plots. It has long been observed that the rms emittance is not an invariant to beam manipulations. The usual emittance calculation removes the correlation between transverse displacement and transverse momentum. In this paper, we explore the possibility of defining higher order correlations that can be removed from the distribution to result in a lower limit to the realizable emittance. The intent is that by inserting the correct combinations of linear lenses at the proper position, the beam may recombine in a way that cancels the effects of some higher order forces. An example might be the non-linear transverse space charge forces which cause a beam to spread. If the beam is then refocused so that the same non-linear forces reverse the inward velocities, the resulting phase space distribution may reasonably approximate the original distribution. The approach to finding the location and strength of the proper lens to optimize the transported beam is based on work by Bruce Carlsten of Los Alamos National Laboratory. 11 refs., 4 figs
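For reference, the correlated rms emittance whose linear-correlation removal the paper generalizes to higher orders can be sketched as follows (a minimal illustration of the standard formula, not the authors' simulation code):

```python
import numpy as np

def rms_emittance(x, xp):
    """Correlated rms emittance eps = sqrt(<x^2><x'^2> - <x x'>^2);
    subtracting the <x x'> term removes the linear correlation between
    transverse displacement and transverse momentum."""
    x = x - x.mean()
    xp = xp - xp.mean()
    val = (x * x).mean() * (xp * xp).mean() - (x * xp).mean() ** 2
    return np.sqrt(max(val, 0.0))  # clamp tiny negative round-off

# a beam with a purely linear x-x' correlation has zero rms emittance,
# which is exactly what a linear lens can remove
x = np.linspace(-1.0, 1.0, 101)
print(rms_emittance(x, 2.0 * x))
```

Higher-order correlations (e.g. x' depending on x cubed) survive this subtraction, which is the gap the paper's proposed higher-order correlation removal targets.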
Efficient computation of spaced seeds
Directory of Open Access Journals (Sweden)
Ilie Silvana
2012-02-01
Background: The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST, and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used; the leading software in the common Bernoulli model is the SpEED program. Findings: SpEED uses a hill-climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs, as well as multiple seeds of the same weight as MegaBLAST that greatly improve its sensitivity. Conclusion: Multiple spaced seeds are being successfully used in bioinformatics software. Enabling researchers to compute very fast, high-quality seeds will help expand the range of their applications.
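The basic notion of a spaced-seed hit can be sketched as follows (an illustrative toy, not the SpEED implementation; the alignment is 0/1-encoded, with '1' meaning the two sequences match at that column):

```python
def seed_hits(seed, alignment):
    """Indices where a spaced seed detects a hit in a 0/1-encoded alignment:
    every '1' (care) position of the seed must line up with a match ('1');
    '0' positions of the seed are wildcards."""
    care = [i for i, c in enumerate(seed) if c == "1"]
    return [j for j in range(len(alignment) - len(seed) + 1)
            if all(alignment[j + i] == "1" for i in care)]

# the spaced seed "101" tolerates the middle mismatch that defeats
# the consecutive seed "11"
print(seed_hits("101", "101"), seed_hits("11", "101"))
```

Seed design, which SpEED optimizes, is the search for a '1'/'0' pattern that maximizes the probability of at least one hit on a true alignment.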
Efficient Resource Management in Cloud Computing
Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan
2015-01-01
Cloud computing, one of the widely used technology to provide cloud services for users who are charged for receiving services. In the aspect of a maximum number of resources, evaluating the performance of Cloud resource management policies are difficult to optimize efficiently. There are different simulation toolkits available for simulation and modelling the Cloud computing environment like GridSim CloudAnalyst, CloudSim, GreenCloud, CloudAuction etc. In proposed Efficient Resource Manage...
Efficient quantum computing using coherent photon conversion.
Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A
2011-10-12
Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting…
Higher-Order and Symbolic Computation
DEFF Research Database (Denmark)
Danvy, Olivier; Mason, Ian
2008-01-01
…a series of implementations that properly account for multiple invocations of the derivative-taking operator. In "Adapting Functional Programs to Higher-Order Logic," Scott Owens and Konrad Slind present a variety of examples of termination proofs of functional programs written in HOL proof systems. Since…-calculus programs, historically. The analysis determines the possible locations of ambients and mirrors the temporal sequencing of actions in the structure of types.
Efficient GPU-based skyline computation
DEFF Research Database (Denmark)
Bøgh, Kenneth Sejdenfaden; Assent, Ira; Magnani, Matteo
2013-01-01
The skyline operator for multi-criteria search returns the most interesting points of a data set with respect to any monotone preference function. Existing work has almost exclusively focused on efficiently computing skylines on one or more CPUs, ignoring the high parallelism possible in GPUs. In...
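Independent of the GPU parallelization, the skyline operator itself can be sketched in a few lines (a naive O(n²) CPU illustration under a minimization preference, not the paper's GPU algorithm):

```python
def skyline(points):
    """Return the Pareto-optimal points (minimization): a point survives
    unless some other point is <= in every coordinate and strictly < in
    at least one (i.e. dominates it)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (2, 2) is dominated by (1, 2); the two incomparable points remain
print(skyline([(1, 2), (2, 1), (2, 2)]))  # [(1, 2), (2, 1)]
```

The dominance tests for different points are independent, which is what makes the operator a natural fit for the massive parallelism of GPUs discussed above.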
Computational efficiency for the surface renewal method
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of methods have been published describing automated calibration of SR parameters. Because the SR method uses high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms use signal-processing techniques and algebraic simplifications that demonstrate how simple modifications can dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
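Structure functions over lagged differences are central to SR calibration, and they illustrate the kind of array-level simplification described above (a hedged sketch: the function name and toy trace are illustrative, not the authors' code):

```python
import numpy as np

def structure_function(a, lag, order):
    """n-th order structure function S^n(r) = mean((a(t) - a(t - r))^n),
    computed with one vectorized slice instead of a per-sample Python loop."""
    d = a[lag:] - a[:-lag]
    return float(np.mean(d ** order))

# toy high-frequency trace; second-order structure function at lags 1 and 2
a = np.array([0.0, 1.0, 2.0, 3.0])
print(structure_function(a, 1, 2), structure_function(a, 2, 2))
```

Evaluating many lags this way replaces an explicit double loop over samples and lags with one slice-and-reduce per lag, which is where most of the speedup in such reimplementations typically comes from.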
Efficient quantum computing with weak measurements
International Nuclear Information System (INIS)
Lund, A P
2011-01-01
Projective measurements with high quantum efficiency are often assumed to be required for efficient circuit-based quantum computing. We argue that this is not the case, and show that the fact that they are not required was actually known previously but was not deeply explored. We examine this issue by giving an example of how to perform the quantum order-finding algorithm efficiently using non-local weak measurements, provided that the measurements used are of bounded weakness and only some fixed but arbitrary probability of success less than unity is required. We also show that it is possible to perform the same computation with only local weak measurements, but this must necessarily introduce an exponential overhead.
Higher Education and Efficiency in Europe: A Comparative Analysis
Sánchez-Pérez, Rosario
2012-01-01
This paper analyses the efficiency of higher education in equalizing the feasible wages obtained by men and women in the labour market. To do so, two stochastic frontiers are estimated. The first measures the effect of higher education within the groups of men and women for six European countries. The results indicate that in Denmark,…
Efficient Multi-Party Computation over Rings
DEFF Research Database (Denmark)
Cramer, Ronald; Fehr, Serge; Ishai, Yuval
2003-01-01
Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best… We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.
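Computation over a non-field ring such as Z_{2^32} can be illustrated with additive secret sharing (a toy sketch of ring-based sharing and a secure sum, not the protocols constructed in the paper):

```python
import secrets

M = 2 ** 32  # computation ring Z_{2^32}, a non-field ring

def share(x, n=3):
    """Split x into n additive shares that sum to x modulo 2^32; any
    n-1 shares are uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(M) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % M)
    return shares

def reconstruct(shares):
    return sum(shares) % M

# secure sum: each party adds its shares locally; only the total is opened
a, b = share(10), share(32)
total = reconstruct([(x + y) % M for x, y in zip(a, b)])
print(total)  # 42
```

Addition of shares works in any ring; it is multiplication and comparison that require the more elaborate machinery the paper develops.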
IMPACT OF ROMANIAN HIGHER EDUCATION FUNDING POLICY ON UNIVERSITY EFFICIENCY
Directory of Open Access Journals (Sweden)
CRETAN Georgiana Camelia
2015-07-01
The issues of higher education funding policy and university operating efficiency are pressing items on the public agenda worldwide, as the pressure on public resources has increased, especially in the aftermath of the last economic crisis. Concerned with improving the funding mechanism through which government allocates public funds to meet national core objectives in higher education, policy makers adjusted the funding policy by diversifying the criteria used to distribute funds to public universities. The aim of this research is to underline both the impact and the consequences that public funding patterns of higher education have on the relative efficiency of publicly funded higher education institutions across time. Moreover, the research aims to determine whether the changes to the Romanian public funding methodology for higher education institutions improved the relative efficiency scores of publicly funded universities before and after the economic crisis of 2008. On the one hand, we underline the changes brought to the Romanian public funding mechanism of higher education during the years 2007, 2009 and 2010 compared to 2006, using content analysis; on the other hand, we assess and compare the relative efficiency scores of each selected publicly funded university using a multiple-input, multiple-output linear programming model, employing the Data Envelopment Analysis technique. The findings emphasize that a more performance-oriented funding mechanism improves the efficiency scores of public universities. The results could be used either by policy makers in higher education or by the administrative management of public universities to correlate funding with the results obtained and/or the objectives assumed by both…
Computer Architecture Techniques for Power-Efficiency
Kaxiras, Stefanos
2008-01-01
In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these
Computer Architecture for Energy Efficient SFQ
2014-08-27
IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598. This ARO-sponsored project at IBM Research identified and modeled an energy-efficient SFQ-based computer architecture, IBM Windsor Blue (WB), illustrated schematically in Figure 2. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit…
ALTERNATIVE APPROACHES TO EFFICIENCY EVALUATION OF HIGHER EDUCATION INSTITUTIONS
Directory of Open Access Journals (Sweden)
Furková, Andrea
2013-09-01
Evaluation of the efficiency and ranking of higher education institutions is a popular and important topic of public policy. Assessing the quality of higher education institutions can stimulate positive changes in higher education. In this study we focus on the assessment and ranking of Slovak economic faculties. We apply two different quantitative approaches to evaluating Slovak economic faculties: Stochastic Frontier Analysis (SFA), an econometric approach, and PROMETHEE II, a multicriteria decision-making method. Via SFA we examine the faculties' success from a scientific point of view, i.e., their success in the area of publications and citations. The next part of the analysis assesses the Slovak economic sciences faculties from an overall point of view through the multicriteria decision-making method. In the analysis we employ panel data covering 11 economic faculties observed over a period of 5 years. Our main aim is to point out other quantitative approaches to the efficiency estimation of higher education institutions.
Energy Efficiency of Higher Education Buildings: A Case Study
Soares, Nelson; Pereira, Luísa Dias; Ferreira, João; Conceição, Pedro; da Silva, Patrícia Pereira
2015-01-01
Purpose: This paper aims to propose an energy efficiency plan (with technical and behavioural improvement measures) for a Portuguese higher education building--the Teaching Building of the Faculty of Economics of the University of Coimbra (FEUC). Design/methodology/approach: The study was developed in the context of both the "Green…
A computationally efficient fuzzy control scheme
Directory of Open Access Journals (Sweden)
Abdel Badie Sharkawy
2013-12-01
This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second-order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs) and fuzzy systems. The controller for each degree of freedom (DOF) consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, where not only the parameters but also the structure of the fuzzy system is optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed loop stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system, and the computation is time-saving because of the simple structure of the fuzzy systems; and (2) the controller is robust against parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.
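The feedback PD part of such a scheme can be illustrated on a unit-mass double integrator (a crisp, non-fuzzy sketch with arbitrarily chosen gains, not the paper's fuzzy controller):

```python
def simulate_pd(kp=40.0, kd=12.0, target=1.0, dt=0.001, steps=5000):
    """Discrete PD feedback on a unit-mass double integrator:
    u = kp*e + kd*de/dt, integrated with semi-implicit Euler."""
    x, v, e_prev = 0.0, 0.0, target
    for _ in range(steps):
        e = target - x
        u = kp * e + kd * (e - e_prev) / dt  # proportional + derivative
        e_prev = e
        v += u * dt                          # unit mass: acceleration = u
        x += v * dt
    return x

print(simulate_pd())  # settles near the target of 1.0
```

A fuzzy PD controller replaces the fixed gains with a rule base over (e, de/dt); the four-rules-per-DOF design above is a coarse fuzzy partition of exactly this error space.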
Efficient computation method of Jacobian matrix
International Nuclear Information System (INIS)
Sasaki, Shinobu
1995-05-01
As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer-algebra technologies allow analytical formulations to be derived with no human intervention. In particular, compared to previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, for a spherical wrist, the present approach is shown to be the most computationally efficient. These advantages make the proposed method useful for studying kinematically peculiar properties such as singularities. (author)
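For comparison, the analytic Jacobian of a planar two-link arm (a standard textbook case small enough to write out directly, not the paper's recursive velocity formulation) already shows the trigonometric structure and the singularity analysis mentioned above:

```python
import math

def jacobian_2link(t1, t2, l1=1.0, l2=1.0):
    """Analytic 2x2 Jacobian of a planar two-link arm with joint angles
    t1, t2 and link lengths l1, l2; the end effector sits at
    x = l1*cos(t1) + l2*cos(t1 + t2), y = l1*sin(t1) + l2*sin(t1 + t2)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

# det J = l1*l2*sin(t2): the arm is singular when fully stretched (t2 = 0)
J = jacobian_2link(0.0, 0.0)
print(J[0][0] * J[1][1] - J[0][1] * J[1][0])
```

For a 6-DOF arm the corresponding closed forms explode in size, which is the motivation for the recursive, frame-transformation-based derivation described in the abstract.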
Computationally Efficient Prediction of Ionic Liquid Properties
DEFF Research Database (Denmark)
Chaban, V. V.; Prezhdo, O. V.
2014-01-01
Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than ordinary liquids. We exploit… to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of existing simulation approaches as applied to RTILs by more than an order of magnitude.
Structured Parallel Programming Patterns for Efficient Computation
McCool, Michael; Robison, Arch
2012-01-01
Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th
Dimensioning storage and computing clusters for efficient High Throughput Computing
CERN. Geneva
2012-01-01
Scientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch-job failures. In this paper we point out relevant features for running a successful s...
Computationally Efficient Clustering of Audio-Visual Meeting Data
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
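The compressed-domain idea above can be sketched in a few lines: rather than decoding pixels, one aggregates the motion vectors the MPEG-4 encoder already computed. The function below is a hypothetical illustration (the chapter's actual features are more elaborate) and assumes the per-macroblock vectors have already been parsed from the bitstream.

```python
import math

def visual_activity(motion_vectors):
    """Average motion-vector magnitude over one frame's macroblocks.

    motion_vectors: list of (dx, dy) pairs, one per macroblock, assumed
    to be parsed from the compressed MPEG-4 stream (no pixel decoding).
    """
    if not motion_vectors:
        return 0.0
    return sum(math.hypot(dx, dy) for dx, dy in motion_vectors) / len(motion_vectors)

# Hypothetical frame: 30 static macroblocks plus a moving region of 10
frame = [(0, 0)] * 30 + [(4, 3)] * 10
print(visual_activity(frame))  # 1.25
```

Thresholding such per-frame activity over time yields a cheap "who is moving when" signal that can then be associated with the speaker-diarization output.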
Computer-Supported Collaborative Learning in Higher Education
Roberts, Tim, Ed.
2005-01-01
"Computer-Supported Collaborative Learning in Higher Education" provides a resource for researchers and practitioners in the area of computer-supported collaborative learning (also known as CSCL); particularly those working within a tertiary education environment. It includes articles of relevance to those interested in both theory and practice in…
Energy Efficiency in Computing (1/2)
CERN. Geneva
2016-01-01
As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic L...
An efficient higher order family of root finders
Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.
2008-06-01
A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence order of the basic fourth-order family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions that provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
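The Hansen-Patrick-based family itself is not spelled out in the abstract, but the flavor of simultaneous root finding with single-step (Gauss-Seidel) updates can be illustrated with the simpler Durand-Kerner (Weierstrass) method, in which each zero's correction immediately uses the freshest approximations of the other zeros:

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Simultaneously approximate all roots of a monic polynomial.

    coeffs: [a_{n-1}, ..., a_1, a_0] for p(z) = z^n + a_{n-1} z^{n-1} + ... + a_0.
    Single-step (Gauss-Seidel) mode: each updated root is used at once
    in the corrections of the remaining roots.
    """
    n = len(coeffs)

    def p(z):
        return z**n + sum(c * z**(n - 1 - k) for k, c in enumerate(coeffs))

    # Standard starting values: powers of a complex number off the unit circle
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            correction = p(z[i]) / denom
            z[i] -= correction
            delta = max(delta, abs(correction))
        if delta < tol:
            break
    return z
```

For example, `durand_kerner([0, 0, -1])` (i.e. z^3 - 1) converges to the three cube roots of unity.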
A primer on the energy efficiency of computing
Energy Technology Data Exchange (ETDEWEB)
Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)
2015-03-30
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
Energy Efficiency in Computing (2/2)
CERN. Geneva
2016-01-01
We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...
Agasisti, Tommaso; Johnes, Geraint
2009-01-01
We employ Data Envelopment Analysis to compute the technical efficiency of Italian and English higher education institutions. Our results show that, in relation to the country-specific frontier, institutions in both countries are typically very efficient. However, institutions in England are more efficient than those in Italy when we compare…
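Data Envelopment Analysis in general requires solving one linear program per institution; in the special case of a single input and a single output, the CCR (constant returns to scale) efficiency reduces to comparing output/input ratios against the best performer. A minimal sketch with made-up figures:

```python
def dea_ccr_single(inputs, outputs):
    """CCR (constant returns to scale) DEA efficiency for the one-input,
    one-output special case: each unit's output/input ratio relative to
    the best ratio, which defines the efficient frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical institutions: academic staff (input), graduates (output)
staff = [100, 120, 80]
graduates = [500, 540, 440]
effs = dea_ccr_single(staff, graduates)  # third institution defines the frontier
```

With multiple inputs and outputs the same idea becomes a linear program that searches for the most favourable weights for each unit, which is what the paper's country-specific frontiers are built from.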
Dimensioning storage and computing clusters for efficient high throughput computing
International Nuclear Information System (INIS)
Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E
2012-01-01
Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
Energy efficiency interventions in UK higher education institutions
International Nuclear Information System (INIS)
Altan, Hasim
2010-01-01
This paper provides an insight into energy efficiency intervention studies, focusing on issues arising in UK higher education institutions (HEIs) in particular. Based on a review of the context for energy efficiency and carbon reduction programmes in the UK and trends in the higher education sector, existing external and internal policies and initiatives and their relevant issues are extensively discussed. To explore the efficacy of some internal intervention strategies, such as technical, non-technical and management interventions, a survey was conducted among UK higher education institutions between February and April 2008. Consultation responses show that a relatively high percentage of institutions (83%) have embarked on both technical and non-technical initiatives, demonstrating a joined-up approach in this area. Major barriers to intervention studies are also identified, including lack of methodology, non-clarity of energy demand and consumption issues, difficulty in establishing assessment boundaries, and problems with regard to indices and their effectiveness. Besides establishing clear targets for carbon reductions within the sector, it is concluded that it is important to develop systems for effectively measuring and evaluating the impact of different policies, regulations and schemes in the future. - Research Highlights: → The research provides an insight into energy efficiency intervention studies, focusing particularly on issues arising in UK higher education institutions (HEIs). → Based on a review of the context for energy efficiency and carbon reduction programmes in the UK and trends in the higher education sector, existing external and internal policies and initiatives, and their relevant issues are extensively discussed. → To explore the efficacy of some internal intervention strategies, such as technical, non-technical and management interventions, a survey was conducted
ICT energy efficiency in higher education. Continuous measurement and monitoring
Energy Technology Data Exchange (ETDEWEB)
Ter Hofte, H. [Novay, Enschede (Netherlands)
2011-11-15
Power consumption of information and communications technology (ICT) is rising rapidly worldwide. Reducing (the growth in) energy demand helps to achieve sustainability goals in the areas of energy resource depletion, energy security, economy, and ecology. Various governments and industry consortia have set out policies and agreements to reduce the (growth in) demand for energy. In the MJA3 agreements in the Netherlands, various organizations, including all 14 universities and 39 universities of applied sciences, pledged to achieve a 30% increase in energy efficiency by 2020 compared to 2005. In this report, we argue that the number of kilowatt-hours of final electricity used for ICT per enrolled student per day (kWh/st/d) should be used as the primary metric for ICT energy efficiency in higher education. For uses of electricity other than ICT in higher education, we express electricity use in kilowatt-hours per person per day (kWh/p/d). Applying continuous monitoring and management of ICT energy is one approach to increasing ICT energy efficiency in education. In households, providing direct (i.e. real-time) feedback about energy use typically results in 5-15% energy savings, whereas indirect feedback (provided some time after consumption occurs) results in smaller savings, typically 0-10%. Continuous measurement of ICT electricity use can be done in a variety of ways. In this report, we distinguish and describe four major measurement approaches: (1) in-line meters, which require breaking the electrical circuit to install the meter; (2) clamp-on meters, which can be wrapped around a wire; (3) add-ons to existing energy meters, which use analog or digital ports of existing energy meters; (4) software-only measurement, which uses existing network interfaces, protocols and APIs. A measurement approach can be used at one or more aggregation levels: at building level (to measure all electrical energy used in a building, e.g. a datacenter); at
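The proposed metric itself is just a normalization; a minimal sketch with hypothetical figures (not taken from the report):

```python
def ict_energy_efficiency(kwh_ict_per_year, enrolled_students, days=365):
    """The report's proposed metric: kilowatt-hours of final electricity
    used for ICT per enrolled student per day (kWh/st/d)."""
    return kwh_ict_per_year / enrolled_students / days

# Hypothetical institution: 1.8 GWh/year of ICT electricity, 25,000 students
print(round(ict_energy_efficiency(1_800_000, 25_000), 3))  # 0.197
```

Tracking this number continuously, rather than once a year, is what enables the direct-feedback savings the report cites for households.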
Considerations for higher efficiency and productivity in research activities.
Forero, Diego A; Moore, Jason H
2016-01-01
There are several factors known to affect research productivity; some of them require large financial investments, and others are related to work styles. Some articles provide suggestions for early-career scientists (PhD students and postdocs), but few publications are oriented to professors on scientific leadership. As academic mentoring can be useful at all levels of experience, in this note we suggest several key considerations for higher efficiency and productivity in academic and research activities. More research is needed into the main work-style features that differentiate highly productive scientists and research groups, as some of them could be innate and others transferable. As funding agencies, universities and research centers invest large amounts of money in order to achieve better scientific productivity, a deeper understanding of these factors will be of high academic and societal impact.
Numerical aspects for efficient welding computational mechanics
Directory of Open Access Journals (Sweden)
Aburuga Tarek Kh.S.
2014-01-01
Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model is constructed to simulate the multi-pass mismatched submerged arc welding (SAW) used in the welded tensile test specimen. A sequentially coupled thermal-mechanical analysis is performed using ABAQUS software to calculate the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the time consumed by welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models were constructed for the same analysis type, and shell elements in the two-dimensional simulation showed good performance compared with brick elements. The conventional method of calculating residual stress uses an implicit scheme, because the welding and cooling times are relatively long. In this work, the author shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. With this new technique, it becomes possible to simulate relatively large three-dimensional structures.
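The mass-scaling trick works because the stable time increment of an explicit scheme is proportional to element size divided by the material wave speed, and the wave speed scales as sqrt(E/rho): multiplying the density by f^2 therefore multiplies the allowable increment by f. A sketch with illustrative steel-like values (not the paper's actual model data):

```python
import math

def stable_time_increment(elem_size, density, youngs_modulus):
    """Courant-type stable increment for explicit dynamics:
    dt ~ L_e / c with dilatational wave speed c = sqrt(E / rho)."""
    return elem_size / math.sqrt(youngs_modulus / density)

# Illustrative steel-like values: 1 mm elements
dt = stable_time_increment(1e-3, 7850.0, 210e9)
# Scaling the mass (density) by f**2 = 100**2 enlarges dt by f = 100,
# cutting the number of explicit increments a hundredfold.
dt_scaled = stable_time_increment(1e-3, 7850.0 * 100**2, 210e9)
```

The price of mass scaling is artificial inertia, so it is only safe where the response is quasi-static, which is why it suits slow welding/cooling transients.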
Computer-aided voice training in higher education: participants ...
African Journals Online (AJOL)
The training of performance singing in a multi lingual, multi cultural educational context presents unique problems and requires inventive teaching strategies. Computer-aided training offers objective visual feedback of the voice production that can be implemented as a teaching aid in higher education. This article reports on ...
Efficiency assessment models of higher education institution staff activity
Directory of Open Access Journals (Sweden)
K. A. Dyusekeyev
2016-01-01
Full Text Available The paper substantiates the need to improve the university staff incentive system under conditions of competition in higher education, and in particular to develop a separate model for evaluating the effectiveness of department heads. The authors analysed methods for assessing the production function of units and show the advantage of applying frontier methods to economic structures in the field of higher education. The choice of the data envelopment analysis (DEA) method to solve the problem is justified, and a model for evaluating the activity of university departments on the basis of the DEA methodology is developed. On the basis of staff pay systems operating in universities in Russia, Kazakhstan and other countries, a system of criteria for evaluating university staff activity has been designed. To clarify and specify the criteria of departmental efficiency, a strategic map was developed that allowed the input and output parameters of the model to be determined. The DEA methodology takes into account a large number of input and output parameters, increases the objectivity of the assessment by excluding experts, and provides interim data to identify the strengths and weaknesses of the evaluated object.
Higher-Order Integral Equation Methods in Computational Electromagnetics
DEFF Research Database (Denmark)
Jørgensen, Erik; Meincke, Peter
Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...
Quantum Computing and the Limits of the Efficiently Computable
CERN. Geneva
2015-01-01
I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course about computer science's P vs. NP problem, as well as about the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrodinger equation. Finally, I'll discuss BosonSampling ---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.
Efficiently outsourcing multiparty computation under multiple keys
Peter, Andreas; Tews, Erik; Tews, Erik; Katzenbeisser, Stefan
2013-01-01
Secure multiparty computation enables a set of users to evaluate certain functionalities on their respective inputs while keeping these inputs encrypted throughout the computation. In many applications, however, outsourcing these computations to an untrusted server is desirable, so that the server
A computationally efficient approach for template matching-based ...
Indian Academy of Sciences (India)
In this paper, a new computationally efficient image registration method is ...... the proposed method requires less computational time as compared to traditional methods. ... Zitová B and Flusser J 2003 Image registration methods: A survey.
Selectively Fortifying Reconfigurable Computing Device to Achieve Higher Error Resilience
Directory of Open Access Journals (Sweden)
Mingjie Lin
2012-01-01
Full Text Available With the advent of 10 nm CMOS devices and “exotic” nanodevices, the location and occurrence time of hardware defects and design faults become increasingly unpredictable, posing severe challenges to existing techniques for error-resilient computing, because most of them statically assign hardware redundancy and do not account for the error tolerance inherent in many mission-critical applications. This work proposes a novel approach to selectively fortifying a target reconfigurable computing device in order to achieve hardware-efficient error resilience for a specific target application. We intend to demonstrate that such error resilience can be significantly improved with effective hardware support. The major contributions of this work include (1) the development of a complete methodology to perform sensitivity and criticality analysis of hardware redundancy, (2) a novel problem formulation and an efficient heuristic methodology to selectively allocate hardware redundancy among a target design's key components in order to maximize its overall error resilience, and (3) an academic prototype of an SFC computing device that demonstrates a fourfold improvement in error resilience for an H.264 encoder implemented on an FPGA device.
Energy efficient hybrid computing systems using spin devices
Sharad, Mrigank
Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin current facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.
Role of computational efficiency in process simulation
Directory of Open Access Journals (Sweden)
Kurt Strand
1989-07-01
Full Text Available It is demonstrated how efficient numerical algorithms may be combined to yield a powerful environment for analysing and simulating dynamic systems. The importance of using efficient numerical algorithms is emphasized and demonstrated through examples from the petrochemical industry.
MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY
Directory of Open Access Journals (Sweden)
V. K. Nedbalsky
2007-01-01
Full Text Available A design of a low pressure turbine has been developed; it is covered by an invention patent and a useful model patent. Testing of the hydraulic turbine model has been carried out with the model installed on a vertical shaft. The efficiency was equal to 76–78 %, which exceeds the efficiency of known low pressure blade turbines.
The Ability of implementing Cloud Computing in Higher Education - KRG
Directory of Open Access Journals (Sweden)
Zanyar Ali Ahmed
2017-06-01
Full Text Available Cloud computing (CC) is a new technology: an online service that can store and retrieve information without requiring physical access to the files on hard drives. The information is available on a server where it can be accessed by clients when needed. Universities of the Kurdistan Regional Government (KRG), which lack ICT infrastructure, can use this new technology because of its economic advantages, enhanced data management, better maintenance, high performance, and improved availability and accessibility, thereby achieving easy maintenance of organizational institutes. The aim of this research is to assess the ability and possibility of implementing cloud computing in higher education in the KRG. This research will help the universities start establishing cloud computing in their services. A survey was conducted to evaluate the CC services that KRG universities have applied. The results showed that most KRG universities are using SaaS, and that MHE-KRG universities and institutions confront many challenges and concerns in terms of security, user privacy, lack of integration with current systems, and data and document ownership.
Efficient Parallel Engineering Computing on Linux Workstations
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
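The module itself is written in C, but the idea of transparently farming independent analysis cases out to lightweight workers can be sketched with Python's standard multiprocessing pool (an analogy to illustrate the interface style, not the module's actual API):

```python
from multiprocessing import Pool

def simulate_case(case_id):
    """Stand-in for one independent engineering analysis task."""
    return case_id * case_id

if __name__ == "__main__":
    # Farm the independent cases out to a pool of worker processes;
    # near-ideal speed-up is expected when the cases dominate run time.
    with Pool(processes=4) as pool:
        results = pool.map(simulate_case, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key property mirrored here is the minimal interface: the application supplies a plain per-case function and a list of cases, and the parallel machinery stays out of sight.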
Computation of the efficiency distribution of a multichannel focusing collimator
International Nuclear Information System (INIS)
Balasubramanian, A.; Venkateswaran, T.V.
1977-01-01
This article describes two computer methods of calculating the point source efficiency distribution functions of a focusing collimator with round tapered holes. The first method which computes only the geometric efficiency distribution is adequate for low energy collimators while the second method which computes both geometric and penetration efficiencies can be made use of for medium and high energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high energy collimator the entire detector region is imagined to be divided into elemental areas. Efficiency of the elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three dimensional geometric techniques. The method of computing the line source efficiency distribution from point source distribution is also explained. The formulations have been tested by computing the efficiency distribution of several commercial collimators and collimators fabricated by us. (Auth.)
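For the geometric (first) method, the basic building block is the point-source efficiency of a single aperture. For a point source on the axis of a circular hole, the fraction of isotropically emitted photons accepted is the subtended solid angle over 4*pi; a sketch of that elementary step (ignoring septal penetration and scatter, as the geometric method does):

```python
import math

def point_source_geometric_efficiency(radius, distance):
    """Fraction of isotropically emitted photons passing through a circular
    aperture of the given radius at the given axial distance:
    Omega / (4*pi) = (1 - d / sqrt(d**2 + r**2)) / 2."""
    return 0.5 * (1.0 - distance / math.hypot(distance, radius))
```

The efficiency tends to 0.5 as the source approaches the aperture plane (half the emitted photons pass) and falls off roughly as r^2 / (4 d^2) at large distances; the collimator-wide distribution is then built by summing such single-hole contributions.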
Computationally efficient prediction of area per lipid
DEFF Research Database (Denmark)
Chaban, Vitaly V.
2014-01-01
dynamics increases exponentially with respect to temperature. APL dependence on temperature is linear over an entire temperature range. I provide numerical evidence that thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest...
Efficient multigrid computation of steady hypersonic flows
Koren, B.; Hemker, P.W.; Murthy, T.K.S.
1991-01-01
In steady hypersonic flow computations, Newton iteration as a local relaxation procedure and nonlinear multigrid iteration as an acceleration procedure may both easily fail. In the present chapter, some remedies are presented for overcoming these problems. The equations considered are the steady,
Efficient Computations and Representations of Visible Surfaces.
1979-12-01
position as stated. The smooth contour generator may lie along a sharp ridge, for instance. ... From understanding computation to understanding neural circuitry. Neurosci. Res. Prog. Bull. 13, 470-488. Metelli, F. 1970 An algebraic development of
Synthesis of Efficient Structures for Concurrent Computation.
1983-10-01
formal presentation of these techniques, called virtualisation and aggregation, can be found n [King-83$. 113.2 Census Functions Trees perform broadcast... Functions .. .. .. .. ... .... ... ... .... ... ... ....... 6 4 User-Assisted Aggregation .. .. .. .. ... ... ... .... ... .. .......... 6 5 Parallel...6. Simple Parallel Structure for Broadcasting .. .. .. .. .. . ... .. . .. . .... 4 Figure 7. Internal Structure of a Prefix Computation Network
Computationally efficient methods for digital control
Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.
2008-01-01
The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these
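One classical emulation method of the kind drawn from numerical analysis is the bilinear (Tustin) substitution s = (2/T)(1 - z^-1)/(1 + z^-1). As a sketch (an illustration of the general technique, not the paper's specific methods), here is the first-order continuous filter H(s) = a/(s + a) emulated at a fixed sampling period T:

```python
def tustin_lowpass(a, T):
    """Discretize H(s) = a/(s + a) via the Tustin substitution
    s -> (2/T)*(1 - z^-1)/(1 + z^-1), which yields the difference equation
    y[n] = ((2 - aT)*y[n-1] + aT*(u[n] + u[n-1])) / (2 + aT)."""
    k = a * T
    state = {"y": 0.0, "u": 0.0}

    def step(u):
        y = ((2.0 - k) * state["y"] + k * (u + state["u"])) / (2.0 + k)
        state["y"], state["u"] = y, u
        return y

    return step

# Unit-step response of the emulated filter (a = 10 rad/s, T = 10 ms):
step = tustin_lowpass(10.0, 0.01)
out = [step(1.0) for _ in range(1000)]
# After many time constants the output settles at the DC gain of 1.
```

The per-sample cost here is a handful of multiply-adds, which is exactly the quantity the paper proposes to account for explicitly when comparing emulation methods.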
Efficient analytic computation of higher-order QCD amplitudes
International Nuclear Information System (INIS)
Bern, Z.; Chalmers, G.; Dunbar, D.C.; Kosower, D.A.
1995-01-01
The authors review techniques simplifying the analytic calculation of one-loop QCD amplitudes with many external legs, for use in next-to-leading-order corrections to multi-jet processes. Particularly useful are the constraints imposed by perturbative unitarity, collinear singularities and a supersymmetry-inspired organization of helicity amplitudes. Certain sequences of one-loop helicity amplitudes with an arbitrary number of external gluons have been obtained using these constraints
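A classic example of the compactness such helicity methods achieve is the Parke-Taylor formula for tree-level maximally-helicity-violating (MHV) gluon amplitudes, quoted here for orientation (standard material, up to coupling and color-ordering conventions, not a result of this paper):

```latex
% Color-ordered tree amplitude for n gluons, gluons i and j of negative helicity:
A_n^{\mathrm{tree}}\left(1^+,\dots,i^-,\dots,j^-,\dots,n^+\right)
  = i\,\frac{\langle i\,j\rangle^{4}}
            {\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}
```

A single compact ratio of spinor products replaces what would be an enormous sum of Feynman diagrams, which is the kind of structure the unitarity and collinear constraints exploit at one loop.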
HTR plus modern turbine technology for higher efficiencies
International Nuclear Information System (INIS)
Barnert, H.; Kugeler, K.
1996-01-01
The recent efficiency race for natural gas fired power plants with a gas-plus-steam-turbine cycle is briefly reviewed. The question 'can the HTR compete with high efficiencies?' is answered: yes, it can - in principle. The gas-plus-steam-turbine cycle, also called the combi-cycle, is proposed to be taken into consideration here. A comparative study of the efficiency potential is made; it yields 54.5% at 1,050 deg. C gas turbine-inlet temperature. The mechanisms of release versus temperature in the HTR are summarized from the safety report of the HTR MODUL. A short reference is made to the experience from the HTR Helium Turbine Project HHT, which was carried out in the Federal Republic of Germany from 1968 to 1981. (author). 8 figs., 1 tab.
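The 54.5% figure is consistent with the standard idealized combi-cycle relation, in which the steam bottoming cycle runs on the gas turbine's reject heat. The component efficiencies below are assumed for illustration, not taken from the paper:

```python
def combined_cycle_efficiency(eta_gas, eta_steam):
    """Idealized combined (combi-) cycle: the steam cycle recovers the
    gas turbine's reject heat, so eta = eta_GT + (1 - eta_GT) * eta_ST
    (losses between the two cycles neglected)."""
    return eta_gas + (1.0 - eta_gas) * eta_steam

# Assumed component efficiencies chosen to reproduce the ~54.5% figure:
print(round(combined_cycle_efficiency(0.35, 0.30), 3))  # 0.545
```

The relation makes clear why raising the gas-turbine inlet temperature (and hence eta_GT) dominates the efficiency race the abstract describes.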
HTR plus modern turbine technology for higher efficiencies
Energy Technology Data Exchange (ETDEWEB)
Barnert, H; Kugeler, K [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Sicherheitsforschung und Reaktortechnik
1996-08-01
The recent efficiency race for natural gas fired power plants with a gas-plus-steam-turbine cycle is briefly reviewed. The question 'can the HTR compete with high efficiencies?' is answered: yes, it can - in principle. The gas-plus-steam-turbine cycle, also called the combi-cycle, is proposed to be taken into consideration here. A comparative study of the efficiency potential is made; it yields 54.5% at 1,050 deg. C gas turbine-inlet temperature. The mechanisms of release versus temperature in the HTR are summarized from the safety report of the HTR MODUL. A short reference is made to the experience from the HTR Helium Turbine Project HHT, which was carried out in the Federal Republic of Germany from 1968 to 1981. (author). 8 figs., 1 tab.
Efficient Computer Implementations of Fast Fourier Transforms.
1980-12-01
fit in computer? Yes, continue. (9) Determine the fastest algorithm between the WFTA and the PFA from Table 4.6. For N = 420: WFTA - 1296 multiplications, 11352 additions; PFA - 2528 multiplications, 10956 additions. ... real adds = 24(N/4) + 2(3N/4) = 15N/2 (G.8). All odd prime factors equal to or greater than 5 use the general transform section.
COBRE Research Workshop on Higher Education: Equity and Efficiency.
Chicago Univ., IL.
This document comprises 8 papers presented at the COBRE Research Workshop on Higher Education. The papers are: (1) "Schooling and Equality from Generation to Generation;" (2) "Time Series Changes in Personal Income Inequality: The United States Experience, 1939 to 1985;" (3) "Education, Income, and Ability;" (4) "Proposals for Financing Higher…
Exploiting Software Tool Towards Easier Use And Higher Efficiency
Lin, G. H.; Su, J. T.; Deng, Y. Y.
2006-08-01
In developing countries, it is very important to make maximum use of data from instruments built in-house: this relates not only to maximizing the science return on earlier investment -- deep accumulation in every aspect -- but also to science output. Based on this idea, we are developing a software tool called THDP (Tool of Huairou Data Processing), which handles a series of issues that necessarily arise in data processing. This paper discusses its purpose, functions, methods and special features. The primary vehicle for general data interpretation is a set of data visualization and interaction techniques. In the software we employed an object-oriented approach, which is appropriate to this vehicle; it is imperative that the approach provide not only the required functions but do so in as convenient a fashion as possible. As a result, the software not only makes data processing easier to learn for beginners and further improvement more convenient for experienced users, but also greatly increases efficiency in every phase, including analysis, parameter adjustment and result display. Within the framework of the virtual observatory, developing countries should study more of the new related technologies that can advance the ability and efficiency of scientific research, like the software we are developing.
Energy efficiency of computer power supply units - Final report
Energy Technology Data Exchange (ETDEWEB)
Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)
2002-11-15
This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.
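The quantities examined in this report, efficiency and power factor, follow directly from measured input and output power; a small illustrative calculation (the numbers are invented, not taken from the SFOE measurements):

```python
# Toy calculation of power-supply efficiency and power factor from
# measured quantities; the figures below are illustrative assumptions,
# not values from the SFOE report.

def efficiency(p_out_w, p_in_w):
    """Efficiency = DC output power / AC input (real) power."""
    return p_out_w / p_in_w

def power_factor(p_real_w, v_rms, i_rms):
    """Power factor = real power / apparent power (V_rms * I_rms)."""
    return p_real_w / (v_rms * i_rms)

# A PSU delivering 60 W while drawing 80 W real power at 230 V, 0.5 A:
print(round(efficiency(60, 80), 2))          # 0.75
print(round(power_factor(80, 230, 0.5), 2))  # 0.7
```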
Cloud Computing in Higher Education Sector for Sustainable Development
Duan, Yuchao
2016-01-01
Cloud computing is considered a new frontier in the field of computing, as this technology comprises three major entities, namely software, hardware and network. The collective nature of all these entities is known as the Cloud. This research aims to examine the impacts of various aspects, namely cloud computing, sustainability, performance…
Computing with memory for energy-efficient robust systems
Paul, Somnath
2013-01-01
This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
Scripting intercultural computer-supported collaborative learning in higher education
Popov, V.
2013-01-01
Introduction of computer-supported collaborative learning (CSCL), specifically in an intercultural learning environment, creates both challenges and benefits. Among the challenges are the coordination of different attitudes, styles of communication, and patterns of behaving. Among the benefits are
On the efficient parallel computation of Legendre transforms
Inda, M.A.; Bisseling, R.H.; Maslen, D.K.
2001-01-01
In this article, we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the
On the efficient parallel computation of Legendre transforms
Inda, M.A.; Bisseling, R.H.; Maslen, D.K.
1999-01-01
In this article we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the
Computationally efficient clustering of audio-visual meeting data
Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.
2010-01-01
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,
Efficient Computation of Casimir Interactions between Arbitrary 3D Objects
International Nuclear Information System (INIS)
Reid, M. T. Homer; Rodriguez, Alejandro W.; White, Jacob; Johnson, Steven G.
2009-01-01
We introduce an efficient technique for computing Casimir energies and forces between objects of arbitrarily complex 3D geometries. In contrast to other recently developed methods, our technique easily handles nonspheroidal, nonaxisymmetric objects, and objects with sharp corners. Using our new technique, we obtain the first predictions of Casimir interactions in a number of experimentally relevant geometries, including crossed cylinders and tetrahedral nanoparticles.
Octopus: embracing the energy efficiency of handheld multimedia computers
Havinga, Paul J.M.; Smit, Gerardus Johannes Maria
1999-01-01
In the MOBY DICK project we develop and define the architecture of a new generation of mobile hand-held computers called Mobile Digital Companions. The Companions must meet several major requirements: high performance, energy efficient, a notion of Quality of Service (QoS), small size, and low
Pedrycz, Witold; Chen, Shyi-Ming
2011-01-01
Information granules are conceptual entities that aid the perception of complex phenomena. This book looks at granular computing techniques such as algorithmic pursuits and includes diverse applications and case studies from fields such as power engineering.
Special issue of Higher-Order and Symbolic Computation
DEFF Research Database (Denmark)
Danvy, Olivier
, they should have a large range of applicability for a large class of specifications or programs. Only general ideas could become the basis for an automatic system for program development. Bob’s APTS system is indeed the incarnation of most of the techniques he proposed (cf. Leonard and Heitmeyer...... specification, expressed in SCR notation, into C. Two translation strategies are discussed in the paper. Both were implemented using Bob Paige’s APTS programtransformation system. “Computational Divided Differencing and Divided-Difference Arithmetics” uses an approach conceptually similar to the Computational...
Special issue of Higher-Order and Symbolic Computation
DEFF Research Database (Denmark)
Danvy, Olivier; Sabry, Amr
This issue of HOSC is dedicated to the general topic of continuations. It grew out of the third ACM SIGPLAN Workshop on Continuations (CW'01), which took place in London, UK on January 16, 2001 [3]. The notion of continuation is ubiquitous in many different areas of computer science, including...... and streamline Filinski's earlier work in the previous special issue of HOSC (then LISP and Symbolic Computation) that grew out of the first ACM SIGPLAN Workshop on Continuations [1, 2]. Hasegawa and Kakutani's article is the journal version of an article presented at FOSSACS 2001 and that received the EATCS...
Exploring Issues about Computational Thinking in Higher Education
Czerkawski, Betul C.; Lyman, Eugene W., III
2015-01-01
The term computational thinking (CT) has been in academic discourse for decades, but gained new currency in 2006, when Jeanette Wing used it to describe a set of thinking skills that students in all fields may require in order to succeed. Wing's initial article and subsequent writings on CT have been broadly influential; experts in…
Efficient computation of clipped Voronoi diagram for mesh generation
Yan, Dongming
2013-04-01
The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
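The construction described above can be illustrated for the 2D case: a site's clipped Voronoi cell is the bounding rectangle successively clipped against the bisector half-plane toward every other site. This is only a naive per-cell sketch of the concept (assuming distinct sites), not the paper's efficient algorithm:

```python
# Minimal 2D clipped Voronoi sketch: each site's cell is the bounding box
# successively clipped against the bisector half-plane toward every other
# site (Sutherland-Hodgman clipping). Illustration only; not the paper's
# efficient algorithm.

def clip_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        in1 = a * x1 + b * y1 <= c
        in2 = a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:  # edge crosses the boundary: add the intersection
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def clipped_voronoi_cell(site, others, box):
    """Cell of `site`, clipped to the rectangle box = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    poly = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
    sx, sy = site
    for ox, oy in others:
        # Points closer to `site` than to the other site satisfy
        # 2(ox-sx)x + 2(oy-sy)y <= ox^2 + oy^2 - sx^2 - sy^2.
        a, b = 2 * (ox - sx), 2 * (oy - sy)
        c = ox * ox + oy * oy - sx * sx - sy * sy
        poly = clip_halfplane(poly, a, b, c)
    return poly

cell = clipped_voronoi_cell((0.25, 0.5), [(0.75, 0.5)], (0, 0, 1, 1))
print(cell)  # left half of the unit square
```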
Efficient computation of clipped Voronoi diagram for mesh generation
Yan, Dongming; Wang, Wen Ping; Lévy, Bruno; Liu, Yang
2013-01-01
The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
Computational identification of candidate nucleotide cyclases in higher plants
Wong, Aloysius Tze; Gehring, Christoph A
2013-01-01
In higher plants guanylyl cyclases (GCs) and adenylyl cyclases (ACs) cannot be identified using BLAST homology searches based on annotated cyclic nucleotide cyclases (CNCs) of prokaryotes, lower eukaryotes, or animals. The reason is that CNCs
Low rank approach to computing first and higher order derivatives using automatic differentiation
International Nuclear Information System (INIS)
Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.
2012-01-01
This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
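The rank-discovery step described above can be sketched as follows, with a forward finite difference standing in for AD (OpenAD/Rapsodia are not reproduced here) and a toy linear model whose Jacobian has known rank:

```python
# Sketch of the low-rank idea: probe a model's Jacobian with random input
# directions, estimate its effective rank with an SVD, then work with
# derivatives in that reduced subspace. Finite differences stand in for
# automatic differentiation; the model is a toy, not a reactor code.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, rank = 50, 40, 3

# Toy model whose Jacobian has numerical rank 3.
A = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in))
def model(x):
    return A @ x

def jacvec(f, x, v, eps=1e-6):
    """Directional derivative of f at x along v (forward difference)."""
    return (f(x + eps * v) - f(x)) / eps

x0 = rng.standard_normal(n_in)
probes = rng.standard_normal((n_in, 10))  # random probe directions
Y = np.column_stack([jacvec(model, x0, probes[:, j]) for j in range(10)])

s = np.linalg.svd(Y, compute_uv=False)
eff_rank = int(np.sum(s > 1e-6 * s[0]))
print(eff_rank)  # 3 -- far fewer derivative solves needed than n_in
```

Derivatives with respect to only three "pseudo variables" then suffice, instead of one AD sweep per input.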
Computational identification of candidate nucleotide cyclases in higher plants
Wong, Aloysius Tze
2013-09-03
In higher plants guanylyl cyclases (GCs) and adenylyl cyclases (ACs) cannot be identified using BLAST homology searches based on annotated cyclic nucleotide cyclases (CNCs) of prokaryotes, lower eukaryotes, or animals. The reason is that CNCs are often part of complex multifunctional proteins with different domain organizations and biological functions that are not conserved in higher plants. For this reason, we have developed CNC search strategies based on functionally conserved amino acids in the catalytic center of annotated and/or experimentally confirmed CNCs. Here we detail this method which has led to the identification of >25 novel candidate CNCs in Arabidopsis thaliana, several of which have been experimentally confirmed in vitro and in vivo. We foresee that the application of this method can be used to identify many more members of the growing family of CNCs in higher plants. © Springer Science+Business Media New York 2013.
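A search strategy based on functionally conserved catalytic-center residues amounts to a motif scan over candidate sequences; a minimal sketch (the pattern below is a hypothetical stand-in, not the actual GC/AC motif from the paper):

```python
# Motif-style scan for a catalytic-center pattern, in the spirit of the
# search strategy described above. The pattern is a HYPOTHETICAL stand-in,
# not the published GC/AC catalytic-center motif.
import re

# Hypothetical pattern: [RK]-x(3)-[DE]-x(2)-G, written as a regex.
MOTIF = re.compile(r"[RK].{3}[DE].{2}G")

def find_candidates(sequences, motif=MOTIF):
    """Return ids of sequences containing the motif."""
    return [seq_id for seq_id, seq in sequences.items() if motif.search(seq)]

seqs = {
    "prot1": "MKAAAEXXGLLT",   # matches: K-AAA-E-XX-G
    "prot2": "MLLLLLLLLLLL",   # no match
}
print(find_candidates(seqs))  # ['prot1']
```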
Efficient quantum circuits for one-way quantum computing.
Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco
2009-03-13
While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
Directory of Open Access Journals (Sweden)
JongBeom Lim
2018-01-01
Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
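The snapshot idea can be illustrated with the classic marker-based (Chandy-Lamport style) construction, which the record does not name but which underlies most distributed snapshot protocols; a two-process toy version that must capture a consistent global state (no token lost or duplicated):

```python
# A minimal marker-based snapshot in the style of Chandy-Lamport, as a
# stand-in for the protocol in the record above. Two processes exchange
# tokens over FIFO channels; the snapshot plus recorded in-flight
# messages must conserve the total token count.
from collections import deque

MARKER = "MARKER"

class Process:
    def __init__(self, pid, tokens):
        self.pid, self.tokens = pid, tokens
        self.recorded_state = None    # local state at snapshot time
        self.recorded_channel = []    # in-flight messages seen while recording
        self.recording = False

    def start_snapshot(self, out_channel):
        self.recorded_state = self.tokens
        self.recording = True
        out_channel.append(MARKER)

    def receive(self, msg, out_channel):
        if msg == MARKER:
            if self.recorded_state is None:   # first marker: record state now
                self.recorded_state = self.tokens
                out_channel.append(MARKER)
            self.recording = False            # channel recording done
        elif self.recorded_state is not None and self.recording:
            self.recorded_channel.append(msg)  # message was in flight
            self.tokens += msg
        else:
            self.tokens += msg

# Two processes, 5 tokens total; 2 tokens are in flight from P0 to P1.
p0, p1 = Process(0, 3), Process(1, 0)
c01, c10 = deque([2]), deque()

p0.start_snapshot(c01)   # P0 records state = 3, marker queued behind the tokens
while c01:
    p1.receive(c01.popleft(), c10)
while c10:
    p0.receive(c10.popleft(), c01)

total = (p0.recorded_state + p1.recorded_state
         + sum(p0.recorded_channel) + sum(p1.recorded_channel))
print(total)  # 5 -- the snapshot is consistent
```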
Energy-efficient computing and networking. Revised selected papers
Energy Technology Data Exchange (ETDEWEB)
Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai
2011-07-01
This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)
COMPUTER EXPERIMENTS WITH FINITE ELEMENTS OF HIGHER ORDER
Directory of Open Access Journals (Sweden)
Khomchenko A.
2017-12-01
The paper deals with the problem of constructing the basis functions of a quadrilateral finite element of the fifth order by means of the computer algebra system Maple. The Lagrangian approximation of such a finite element contains 36 nodes: 20 nodes on the perimeter and 16 internal nodes. Alternative models with a reduced number of internal nodes are considered. Graphs of the basis functions and cognitive portraits of zero-level lines are presented. The work is aimed at studying the possibilities of using modern information technologies in the teaching of individual mathematical disciplines.
Computer-Mediated Assessment of Higher-Order Thinking Development
Tilchin, Oleg; Raiyn, Jamal
2015-01-01
Solving complicated problems in a contemporary knowledge-based society requires higher-order thinking (HOT). The most productive way to encourage development of HOT in students is through use of the Problem-based Learning (PBL) model. This model organizes learning by solving corresponding problems relative to study courses. Students are directed…
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
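The two-stage structure described above, peak picking followed by harmonic filtering, can be sketched on a synthetic spectrum (the spectrum, thresholds and harmonic-support rule are invented for illustration; the RTFI analysis is not reproduced):

```python
# Sketch of the two-stage idea: peak-pick a (here synthetic) pitch-energy
# spectrum, then discard candidates with weak harmonic support. All
# numbers and thresholds are invented for illustration.

def pick_peaks(spectrum, threshold):
    """Indices that are local maxima above `threshold`."""
    return [i for i in range(1, len(spectrum) - 1)
            if spectrum[i] > threshold
            and spectrum[i] > spectrum[i - 1]
            and spectrum[i] >= spectrum[i + 1]]

def harmonic_support(spectrum, f0_bin, n_harmonics=3):
    """Summed energy at integer multiples of a candidate pitch bin."""
    return sum(spectrum[k * f0_bin]
               for k in range(1, n_harmonics + 1)
               if k * f0_bin < len(spectrum))

# Synthetic spectrum: a true pitch at bin 4 (harmonics at 8 and 12)
# and a spurious isolated peak at bin 6.
spec = [0, 0, 0, 0, 9, 0, 5, 0, 6, 0, 0, 0, 3, 0, 0, 0]
candidates = pick_peaks(spec, threshold=4)
pitches = [b for b in candidates if harmonic_support(spec, b) > 10]
print(candidates, pitches)  # [4, 6, 8] -> [4]
```

Only bin 4 survives: its harmonics contribute energy, while the spurious peaks have no harmonic support.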
Secure Computation, I/O-Efficient Algorithms and Distributed Signatures
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas
2012-01-01
values of the form (r, g^r) for random secret-shared r ∈ ℤ_q and g^r in a group of order q. This costs a constant number of exponentiations per player per value generated, even if fewer than n/3 players are malicious. This can be used for efficient distributed computation of Schnorr signatures. We further develop...... the technique so we can sign secret data in a distributed fashion at essentially the same cost....
Special issue of Higher-Order and Symbolic Computation
DEFF Research Database (Denmark)
solicited from papers presented at ASIAPEPM 02, the 2002 SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation [1]. The four articles were subjected to the usual process of journal reviewing. "Cost-Augmented Partial Evaluation of Functional Logic Programs" extends previous......The present issue is dedicated to Partial Evaluation and Semantics-Based Program Manipulation. Its first two articles were solicited from papers presented at PEPM 02, the 2002 ACMSIGPLANWorkshop on Partial Evaluation and Semantics-Based Program Manipulation [2], and its last two articles were...... narrowing-driven techniques of partial evaluation for functional-logic programs by the inclusion of abstract computation costs into the partial-evaluation process. A preliminary version of this work was presented at PEPM 02. "Specialization Scenarios: A Pragmatic Approach to Declaring Program Specialization...
Convolutional networks for fast, energy-efficient neuromorphic computing.
Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S
2016-10-11
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Improving computational efficiency of Monte Carlo simulations with variance reduction
International Nuclear Information System (INIS)
Turner, A.; Davis, A.
2013-01-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
Efficient MATLAB computations with sparse and factored tensors.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
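The coordinate storage scheme described above can be sketched in a few lines, here with a mode-n tensor-times-vector product computed from the nonzeros only (plain Python standing in for the Tensor Toolbox's MATLAB implementation):

```python
# Coordinate (COO) storage for a sparse tensor, and a mode-n
# tensor-times-vector product that touches only the nonzero entries --
# the storage idea described above, in plain Python rather than MATLAB.
from collections import defaultdict

class CooTensor:
    def __init__(self, shape, entries):
        """entries: dict mapping index tuples -> nonzero values."""
        self.shape = shape
        self.entries = dict(entries)

    def ttv(self, v, mode):
        """Multiply by vector v along `mode`; the result drops that mode."""
        assert len(v) == self.shape[mode]
        out = defaultdict(float)
        for idx, val in self.entries.items():
            rest = idx[:mode] + idx[mode + 1:]
            out[rest] += val * v[idx[mode]]
        new_shape = self.shape[:mode] + self.shape[mode + 1:]
        return CooTensor(new_shape, out)

# A 2x2x3 tensor with two nonzeros out of 12 slots.
T = CooTensor((2, 2, 3), {(0, 1, 2): 5.0, (1, 0, 0): 2.0})
R = T.ttv([1.0, 0.0, 3.0], mode=2)   # contract the last mode
print(R.shape, dict(R.entries))      # (2, 2) {(0, 1): 15.0, (1, 0): 2.0}
```

The work is proportional to the number of nonzeros, not to the full tensor size, which is the efficiency the record describes.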
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Higher-Order and Symbolic Computation / LISP and Symbolic Computation. Editorial
DEFF Research Database (Denmark)
Danvy, Olivier; Dybvig, R. Kent; Lawall, Julia
2008-01-01
system for these static checks and a corresponding type-inference algorithm. In "An Investigation of Jones Optimality and BTI-Universal Specializers," Robert Glueck establishes a connection between Jones optimal-program specializers and binding-time improvers. This article completes a study started...... at ASIA-PEPM 2002 [1]. In "On the Implementation of Automatic Differentiation Tools," Christian H. Bischof, Paul D. Hovland, and Boyana Norris present a survey of some recent tools for the Automatic Differentiation technology (concentrating mainly on ADIC, ADIFOR and sketching XAIF). They also offer...... for removing tuple constructions and tuple selections. This technique solves the problem of efficiently passing tuples to polymorphic functions by avoiding extra memory operations in selecting components of the tuple....
Janetzke, David C.; Murthy, Durbha V.
1991-01-01
Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.
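The 85 percent figure quoted above is parallel efficiency, i.e. speedup divided by processor count; a minimal calculation (the serial time is an illustrative assumption):

```python
# Speedup and parallel efficiency as used in the record above
# ("efficiencies up to 85 percent ... using 32 processors");
# the serial time below is an illustrative assumption.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

# 85% efficiency on 32 transputers corresponds to a speedup of 27.2:
t1 = 100.0
t32 = t1 / 27.2
print(round(parallel_efficiency(t1, t32, 32), 2))  # 0.85
```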
An energy-efficient failure detector for vehicular cloud computing.
Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin
2018-01-01
Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, the existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service, but also saves the battery consumption of RSUs. Through comparative experiments, the results show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
Power-Efficient Computing: Experiences from the COSA Project
Directory of Open Access Journals (Sweden)
Daniele Cesini
2017-01-01
Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications, using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015-2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and high energy-efficient systems based on GP-GPUs. In this work, we present the results of the project analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.
Global discriminative learning for higher-accuracy computational gene prediction.
Directory of Open Access Journals (Sweden)
Axel Bernal
2007-03-01
Most ab initio gene predictors use a probabilistic sequence model, typically a hidden Markov model, to combine separately trained models of genomic signals and content. By combining separate models of relevant genomic features, such gene predictors can exploit small training sets and incomplete annotations, and can be trained fairly efficiently. However, that type of piecewise training does not optimize prediction accuracy and has difficulty in accounting for statistical dependencies among different parts of the gene model. With genomic information being created at an ever-increasing rate, it is worth investigating alternative approaches in which many different types of genomic evidence, with complex statistical dependencies, can be integrated by discriminative learning to maximize annotation accuracy. Among discriminative learning methods, large-margin classifiers have become prominent because of the success of support vector machines (SVMs) in many classification tasks. We describe CRAIG, a new program for ab initio gene prediction based on a conditional random field model with semi-Markov structure that is trained with an online large-margin algorithm related to multiclass SVMs. Our experiments on benchmark vertebrate datasets and on regions from the ENCODE project show significant improvements in prediction accuracy over published gene predictors that use intrinsic features only, particularly at the gene level and on genes with long introns.
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
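The adaptive-basis idea can be sketched with a penalized fit over a small set of basis functions whose centres are sampled using the response values; Gaussian radial bases and this particular sampling rule are simplifying assumptions, not the paper's exact construction:

```python
# Sketch of adaptive basis sampling: fit a penalized regression with a
# small set of basis functions whose centres are sampled where the
# response changes most, instead of one basis function per data point.
# Gaussian radial bases and this sampling rule are simplifying
# assumptions, not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.sort(rng.uniform(0, 1, n))
true = np.sin(2 * np.pi * x)
y = true + 0.1 * rng.standard_normal(n)

# Adaptive sampling: centre probability tied to the observed response change.
dy = np.abs(np.gradient(y))
prob = dy / dy.sum()
centres = np.sort(rng.choice(x, size=40, replace=False, p=prob))

def design(xs, centres, width=0.08):
    """Gaussian radial-basis design matrix."""
    return np.exp(-((xs[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

# Penalized least squares over 40 basis functions instead of n = 2000,
# so the solve is 40x40 rather than O(n^3).
B = design(x, centres)
lam = 1e-2
coef = np.linalg.solve(B.T @ B + lam * np.eye(len(centres)), B.T @ y)
fit = B @ coef

corr = float(np.corrcoef(fit, true)[0, 1])
print(corr > 0.9)  # the reduced-basis fit tracks the true function
```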
Universities UK, 2011
2011-01-01
Effectiveness, efficiency and value for money are central concerns for the higher education sector. In England, decisions made by the current Government will effect a radical change in the funding for teaching. Institutions will be managing a reduction in public funding for teaching and the transition to the new system of graduate contributions,…
Ismail, Mohd Nazri
2009-01-01
In the 21st century, the convergence of technologies and services in heterogeneous environments has given rise to multi-traffic networks. This scenario affects the computer networks that support learning systems in higher education institutes. The implementation of various services can produce different types of content and quality. Higher education institutes should have a good computer network infrastructure to support the usage of various services. The computer network should provide: i) higher bandwidth; ii) ...
Improving robustness and computational efficiency using modern C++
International Nuclear Information System (INIS)
Paterno, M; Kowalkowski, J; Green, C
2014-01-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
Perspective: Memcomputing: Leveraging memory and physics to compute efficiently
Di Ventra, Massimiliano; Traversa, Fabio L.
2018-05-01
It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs
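For contrast with the DMM approach described above, the classical exact treatment of subset-sum is the pseudo-polynomial dynamic program (an illustrative sketch, not the memcomputing method; its cost scales with the target value rather than polynomially in input size):

```python
def subset_sum(weights, target):
    """Classical DP for subset-sum: returns a subset of `weights`
    summing exactly to `target`, or None if none exists.
    Runs in O(len(weights) * target) time, i.e. pseudo-polynomial."""
    parent = {0: None}  # reachable sum -> (previous sum, item index)
    for i, w in enumerate(weights):
        for s in list(parent):          # snapshot: each item used at most once
            t = s + w
            if t <= target and t not in parent:
                parent[t] = (s, i)
    if target not in parent:
        return None
    subset, s = [], target
    while parent[s] is not None:        # walk back through the choices
        prev, i = parent[s]
        subset.append(weights[i])
        s = prev
    return subset
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns a subset summing to 9, while `subset_sum([2, 4], 9)` returns None.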
Directory of Open Access Journals (Sweden)
Mohammed F. Hadi
2012-01-01
It is argued here that more accurate though more compute-intensive alternate algorithms to certain computational methods which are deemed too inefficient and wasteful when implemented within serial codes can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density—in other words, algorithms with small ratios of floating point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite-differences will translate to only two- to threefold increases in actual run times using either graphical or central processing units of today. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and reevaluate them for multicore usability.
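The trade-off the abstract describes (a few times more floating-point work buying much higher accuracy) can be illustrated with second- versus fourth-order central differences; this is a generic sketch, not the paper's FDTD kernels:

```python
import math

def d1_o2(f, x, h):
    # 2nd-order central difference: 2 stencil points
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_o4(f, x, h):
    # 4th-order central difference: 4 stencil points, roughly
    # three times the arithmetic per derivative evaluation
    return (-f(x + 2 * h) + 8 * f(x + h)
            - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

h = 0.1
err2 = abs(d1_o2(math.sin, 1.0, h) - math.cos(1.0))
err4 = abs(d1_o4(math.sin, 1.0, h) - math.cos(1.0))
# the extra flops buy several orders of magnitude in accuracy
```

On memory-bound hardware the four-point stencil costs little more wall time than the two-point one, which is the paper's point.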
Efficient Skyline Computation in Structured Peer-to-Peer Systems
DEFF Research Database (Denmark)
Cui, Bin; Chen, Lijiang; Xu, Linhao
2009-01-01
An increasing number of large-scale applications exploit peer-to-peer network architecture to provide highly scalable and flexible services. Among these applications, data management in peer-to-peer systems is one of the interesting domains. In this paper, we investigate the multidimensional skyline computation problem on a structured peer-to-peer network. In order to achieve low communication cost and quick response time, we utilize the iMinMax(θ) method to transform high-dimensional data to one-dimensional values and distribute the data in a structured peer-to-peer network called BATON. Thereafter, we propose a progressive algorithm with an adaptive filter technique for efficient skyline computation in this environment. We further discuss some optimization techniques for the algorithm, and summarize the key principles of our algorithm into a query routing protocol with detailed analysis...
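The two core ingredients, a one-dimensional mapping in the spirit of iMinMax(θ) and the skyline (dominance) test, can be sketched as follows; the simplified mapping shown here is an assumption for illustration, not the exact transform or the BATON protocol:

```python
def iminmax(point, theta=0.0):
    """Simplified iMinMax(theta)-style mapping of a point in [0,1]^d to
    one dimension: keyed on the max (or min) attribute plus its index.
    Illustrative only; the published transform differs in detail."""
    dmax = max(range(len(point)), key=lambda i: point[i])
    dmin = min(range(len(point)), key=lambda i: point[i])
    if point[dmax] + theta >= 1.0 - point[dmin]:
        return dmax + point[dmax]    # max edge is closer
    return dmin + point[dmin]        # min edge is closer

def dominates(p, q):
    # p dominates q: no worse in every dimension, better in at least one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Points not dominated by any other point (minimization convention)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A distributed version would route skyline queries only to the one-dimensional key ranges that can still contain non-dominated points.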
Adding computationally efficient realism to Monte Carlo turbulence simulation
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
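The idea of replacing an irrational spectrum with a close rational approximation, so that a stable explicit difference equation can generate the turbulence, is illustrated below for the simplest case: a first-order (exponentially correlated) gust model. The parameter names and values are illustrative, not taken from the paper:

```python
import math
import random

def gust_series(n, dt=0.01, tau=0.5, sigma=1.0, seed=42):
    """Exponentially correlated noise from the stable explicit
    difference equation x[k+1] = phi*x[k] + q*w[k], the discrete
    form of a first-order rational spectrum approximation.
    tau is the correlation time, sigma the target std deviation."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                # stable pole: |phi| < 1
    q = sigma * math.sqrt(1.0 - phi * phi)   # keeps variance at sigma^2
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

gust = gust_series(5000)
```

Higher-order rational spectra lead to the same pattern with longer recursions; a cross-spectral (multi-point) version is what adds the spanwise realism discussed above.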
Reducing barriers to energy efficiency in the German higher education sector. Final report
Energy Technology Data Exchange (ETDEWEB)
Schleich, J.; Boede, U.
2000-12-01
This report describes the empirical research into barriers to energy efficiency in the German higher education (HE) sector. It is one of nine such reports in the BARRIERS project. The report contains description and analysis of six case studies of energy management in German universities. The results are analysed using the theoretical framework developed for the BARRIERS project (Sorrell et al., 2000). The report also provides brief recommendations on how these barriers to the rational use of energy (RUE) may be overcome and how energy efficiency within the sector may be improved. The results of the study for the higher education sector in Germany are summarised in this executive summary under the following headings: - Characterising the higher education sector; - Case studies of energy management in the German higher education sector; - Evidence of barriers in the German higher education sector; - The role of energy service companies in the higher education sector; - Policy implications. (orig.)
Reducing barriers to energy efficiency in the German higher education sector. Executive summary
Energy Technology Data Exchange (ETDEWEB)
Schleich, J.; Boede, U.
2000-12-01
This report describes the empirical research into barriers to energy efficiency in the German higher education (HE) sector. It is one of nine such reports in the BARRIERS project. The report contains description and analysis of six case studies of energy management in German universities. The results are analysed using the theoretical framework developed for the BARRIERS project (Sorrell et al., 2000). The report also provides brief recommendations on how these barriers to the rational use of energy (RUE) may be overcome and how energy efficiency within the sector may be improved. The results of the study for the higher education sector in Germany are summarised in this executive summary under the following headings: - Characterising the higher education sector; - Case studies of energy management in the German higher education sector; - Evidence of barriers in the German higher education sector; - The role of energy service companies in the higher education sector; - Policy implications. (orig.)
Graphics processor efficiency for realization of rapid tabular computations
International Nuclear Information System (INIS)
Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.
2016-01-01
Capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for the realization of fast-calculation algorithms using tabulated functions. The implementation of tabulated functions is exemplified on GPU/CPU-architecture processors. A comparison is made between the operating efficiencies of GPU and CPU employed for tabular calculations under different conditions of use. Recommendations are formulated for the use of graphical and central processors to speed up scientific and engineering computations through the use of tabulated functions
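A uniform-grid table lookup with linear interpolation, the kind of kernel that maps naturally onto both CPU vector units and GPU texture hardware, can be sketched as follows (a generic illustration, not the authors' code):

```python
import math

def build_table(f, lo, hi, n):
    """Precompute f on a uniform grid of n points over [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return lo, step, [f(lo + i * step) for i in range(n)]

def lookup(table, x):
    """Linearly interpolated table lookup: one divide, a clamped index
    computation, and one blend, instead of evaluating f itself."""
    lo, step, ys = table
    i = int((x - lo) / step)
    i = max(0, min(i, len(ys) - 2))
    t = (x - (lo + i * step)) / step
    return ys[i] * (1.0 - t) + ys[i + 1] * t

tab = build_table(math.exp, 0.0, 1.0, 1025)
# max interpolation error ~ step**2 / 8 * max|f''|, about 3e-7 here
```

Whether this beats direct evaluation depends on memory bandwidth versus arithmetic cost, which is exactly the GPU/CPU comparison the abstract reports.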
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor … a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower…
Efficient Use of Preisach Hysteresis Model in Computer Aided Design
Directory of Open Access Journals (Sweden)
IONITA, V.
2013-05-01
The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer).
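A minimal classical Preisach model, an equally weighted triangular grid of relay hysterons, reproduces the essential branch behaviour; the thresholds below are illustrative, not identified from measured data:

```python
class PreisachModel:
    """Minimal classical Preisach model: equally weighted relay
    hysterons on a triangular grid with up-threshold a > down-
    threshold b. Thresholds are illustrative, not identified data."""
    def __init__(self, n=20, hmax=1.0):
        self.hysterons = []              # entries: [a, b, state]
        step = 2.0 * hmax / n
        for i in range(n):
            for j in range(i + 1):
                a = -hmax + step * (i + 1)   # switch-up field
                b = -hmax + step * j         # switch-down field (b < a)
                self.hysterons.append([a, b, -1])

    def apply(self, h):
        """Apply field h; return normalized magnetization in [-1, 1]."""
        for hyst in self.hysterons:
            if h >= hyst[0]:
                hyst[2] = +1
            elif h <= hyst[1]:
                hyst[2] = -1
        return sum(s for _, _, s in self.hysterons) / len(self.hysterons)

m = PreisachModel()
up = [m.apply(k / 10.0) for k in range(-10, 11)]        # ascending sweep
down = [m.apply(k / 10.0) for k in range(10, -11, -1)]  # descending sweep
# at h = 0 the ascending and descending branches disagree: hysteresis
```

Identification then amounts to fitting hysteron weights to measured first-order reversal curves, after the demagnetizing-field correction described in the abstract.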
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
Energy Technology Data Exchange (ETDEWEB)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
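The serial building block behind tridiagonal-based Poisson solvers, which parallel schemes such as PDD then decompose across processors, is the Thomas algorithm; a sketch applied to a 1-D model Poisson problem:

```python
import math

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system; a = sub-, b = main,
    c = super-diagonal. This is the serial kernel that parallel
    tridiagonal schemes (e.g. PDD) split across processors."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem: -u'' = pi^2 sin(pi x), u(0) = u(1) = 0; exact u = sin(pi x)
n = 99
h = 1.0 / (n + 1)
rhs = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = thomas_solve([-1.0] * n, [2.0] * n, [-1.0] * n, rhs)
```

The forward/backward recurrences are inherently serial, which is exactly why algorithms like PDD restructure the elimination for distributed memory.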
Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency
Directory of Open Access Journals (Sweden)
M. K. Sakharov
2017-01-01
In solving practically significant problems of global optimization, the objective function is often of high dimensionality and computational complexity, and has a nontrivial landscape as well. Studies show that one optimization method is often not enough for solving such problems efficiently; hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field are memetic algorithms (MA), which can be viewed as a combination of a population-based search for a global optimum and procedures for local refinement of solutions (memes), provided by a synergy. Since there are relatively few theoretical studies concerning the MA configuration advisable for solving black-box optimization problems, many researchers tend toward adaptive algorithms, which select for the search the most efficient local optimization methods for certain domains of the search space. The article proposes a multi-memetic modification of the simple SMEC algorithm using random hyper-heuristics. It presents the software implementation and the memes used (the Nelder-Mead method, random hypersphere surface search, and the Hooke-Jeeves method), and reports a comparative study of the efficiency of the proposed algorithm depending on the set and number of memes. The study has been carried out using the Rastrigin, Rosenbrock, and Zakharov multidimensional test functions. Computational experiments have been carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted by the multi-start method, the combinations of memes comprising the Hooke-Jeeves method were successful. These results prove a rapid convergence of the method to a local optimum in comparison with other memes, since all methods perform a fixed number of iterations at the most. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal
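The structure described above, a population-based global phase whose individuals are refined by a local-search meme, can be sketched with the Rastrigin function and a simplified Hooke-Jeeves pattern search; parameter values are illustrative, and this is not the SMEC algorithm itself:

```python
import math
import random

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def hooke_jeeves(f, x, step=1.0, eps=1e-6):
    """Simplified Hooke-Jeeves pattern search: the local meme.
    Probes +/-step along each axis, halving step when stuck."""
    fx = f(x)
    while step > eps:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def memetic_search(f, dim=2, pop=20, lo=-5.0, hi=5.0, seed=1):
    """One global/local cycle of a memetic search: random population
    (global phase), each individual refined by the meme (local phase)."""
    rng = random.Random(seed)
    best = None
    for _ in range(pop):
        x0 = [rng.uniform(lo, hi) for _ in range(dim)]
        x, fx = hooke_jeeves(f, x0)
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

x_best, f_best = memetic_search(rastrigin)
```

A multi-memetic variant would draw the meme for each individual from a pool (here, also Nelder-Mead and hypersphere search) via a random hyper-heuristic.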
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying; Stein, Michael L.
2014-11-07
For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
International Nuclear Information System (INIS)
Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.
2009-01-01
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
A Computational Framework for Efficient Low Temperature Plasma Simulations
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTP. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested based on numerical results assessing the accuracy and efficiency of benchmarks for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
Gentzsch, Wolfgang
1986-01-01
The GAMM Committee for Numerical Methods in Fluid Mechanics organizes workshops which should bring together experts of a narrow field of computational fluid dynamics (CFD) to exchange ideas and experiences in order to speed up the development in this field. In this sense it was suggested that a workshop should treat the solution of CFD problems on vector computers. Thus we organized a workshop with the title "The efficient use of vector computers with emphasis on computational fluid dynamics". The workshop took place at the Computing Centre of the University of Karlsruhe, March 13-15, 1985. Participation had been restricted to 22 people from 7 countries, and 18 papers were presented. In the announcement of the workshop we wrote: "Fluid mechanics has actively stimulated the development of superfast vector computers like the CRAYs or CYBER 205. Now these computers in their turn stimulate the development of new algorithms which result in a high degree of vectorization (scalar/vectorized execution-time). But w...
Efficient universal computing architectures for decoding neural activity.
Directory of Open Access Journals (Sweden)
Benjamin I Rapoport
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
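The key constraint above, decoding with no arithmetic beyond counting, can be illustrated with a counter-based integrate-and-fire unit (a toy sketch, not the published architecture):

```python
class CountingNeuron:
    """Integrate-and-fire unit realized with counting and comparison
    only, in the spirit of the arithmetic-free architecture above."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def receive(self, spike):
        if spike:
            self.count += 1          # counting is the only arithmetic
        if self.count >= self.threshold:
            self.count = 0           # reset after an output spike
            return True
        return False

neuron = CountingNeuron(threshold=3)
out = [neuron.receive(1) for _ in range(7)]
# fires on every 3rd input spike
```

A network of such units, with spikes routed by lookup tables, needs only counters, comparators, and logic gates, which is what keeps the implanted power budget so low.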
An Efficient Higher-Order Quasilinearization Method for Solving Nonlinear BVPs
Directory of Open Access Journals (Sweden)
Eman S. Alaidarous
2013-01-01
In this research paper, we present higher-order quasilinearization methods for boundary value problems as well as coupled boundary value problems. The construction of higher-order convergent methods depends on a decomposition method which is different from the Adomian decomposition method (Motsa and Sibanda, 2013). The reported method is very general and can be extended to the desired order of convergence for highly nonlinear differential equations, and it is also computationally superior to the proposed iterative method based on Adomian decomposition because our iterative scheme avoids the calculation of Adomian polynomials while achieving the same computational order of convergence as the authors have claimed in Motsa and Sibanda, 2013. In order to check the validity and computational performance, the constructed iterative schemes are also successfully applied to bifurcation problems to calculate the values of critical parameters. The numerical performance is also tested for the one-dimensional Bratu and Frank-Kamenetskii equations.
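First-order quasilinearization (Newton linearization of the nonlinearity) on the one-dimensional Bratu problem u'' + lam*e^u = 0, u(0) = u(1) = 0, gives the flavour of these methods; this sketch is the plain first-order scheme on a finite-difference grid, not the higher-order variants constructed in the paper:

```python
import math

def solve_tridiag(a, b, c, d):
    # Thomas algorithm for the linearized system (a sub, b main, c super)
    n = len(b)
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def bratu_quasilinearization(lam=1.0, n=99, iters=20):
    """Each sweep solves the linearization of u'' + lam*e^u = 0 around
    the current iterate u: v'' + lam*e^u*(1 + v - u) = 0, a linear
    tridiagonal BVP. First-order (Newton) quasilinearization only."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(iters):
        e = [lam * math.exp(ui) for ui in u]
        a = [1.0] * n
        b = [-2.0 + h * h * e[i] for i in range(n)]
        c = [1.0] * n
        d = [-h * h * e[i] * (1.0 - u[i]) for i in range(n)]
        u = solve_tridiag(a, b, c, d)
    return u

u = bratu_quasilinearization()
u_mid = u[len(u) // 2]   # lower-branch midpoint value, roughly 0.14 for lam = 1
```

Tracking where the linearized solve stops converging as lam grows is the bifurcation (critical parameter) computation mentioned in the abstract.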
The economical efficiency of private investments in higher education in Russia
Directory of Open Access Journals (Sweden)
Elena Maksyutina
2011-12-01
The article investigates the economic efficiency of investments in higher education under modern conditions in Russia. The beginning of the article characterizes the existing empirical research on the efficiency of investments in human capital. The author then presents payback calculations for private investments in higher education. The research found that under modern Russian conditions investments in higher education are exceedingly advantageous: high rates of return to higher education and the short payback period of these investments explain the continuously growing demand for it among the population, especially young people. The article shows that the educational level of the population in Russia is quite high, but the accumulated human capital is used insufficiently effectively: many people with higher education are forced to take jobs not requiring higher education. This sharp shift in the educational behavior of Russians places new demands on the labor market. Graduates of higher educational institutions enter the labor market with qualitatively different expectations, but the current pace of Russian economic development cannot provide job positions for all of them. That is why structural change of the economy is needed.
CIGS cells with metallized front contact: Longer cells and higher efficiency
Deelen, J. van; Frijters, C.
2017-01-01
We have investigated the benefit of a patterned metallization on top of a transparent conductive oxide in CIGS thin-film solar panels. It was found that cells with a grid have a higher efficiency compared to cells with only a TCO. This was observed for all cell lengths used. Furthermore, metallic
Efficiency, Costs, Rankings and Heterogeneity: The Case of US Higher Education
Agasisti, Tommaso; Johnes, Geraint
2015-01-01
Among the major trends in the higher education (HE) sector, the development of rankings as a policy and managerial tool is of particular relevance. However, despite the diffusion of these instruments, it is still not clear how they relate with traditional performance measures, like unit costs and efficiency scores. In this paper, we estimate a…
Directory of Open Access Journals (Sweden)
Robin H. Kay
2011-04-01
Full Text Available Because of decreased prices, increased convenience, and wireless access, an increasing number of college and university students are using laptop computers in their classrooms. This recent trend has forced instructors to address the educational consequences of using these mobile devices. The purpose of the current study was to analyze and assess beneficial and challenging laptop behaviours in higher education classrooms. Both quantitative and qualitative data were collected from 177 undergraduate university students (89 males, 88 females). Key benefits observed include note-taking activities, in-class laptop-based academic tasks, collaboration, increased focus, improved organization and efficiency, and addressing special needs. Key challenges noted include other students' distracting laptop behaviours, instant messaging, surfing the web, playing games, watching movies, and decreased focus. Nearly three-quarters of the students claimed that laptops were useful in supporting their academic experience. Twice as many benefits were reported compared to challenges. It is speculated that the integration of meaningful laptop activities is a critical determinant of the benefits and challenges experienced in higher education classrooms.
Energy Technology Data Exchange (ETDEWEB)
Brown, K. A. [Brookhaven National Lab. (BNL), Upton, NY (United States); Schoefer, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Tomizawa, M. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan)
2017-03-09
The new accelerator complex at J-PARC will operate with both high-energy and very high-intensity proton beams. With a design slow extraction efficiency of greater than 99%, this facility will still deposit significant beam power onto accelerator components [2]. Achieving even higher efficiencies requires new ideas, since the design of the extraction system and the accelerator lattice structure leaves little room for improvement using conventional techniques. In this report we present one method for improving the slow extraction efficiency at J-PARC: adding duodecapoles or octupoles to the slow extraction system. We review the theory of resonant extraction, describe our simulation methods, and present the results of detailed simulations. From our investigations we find that we can improve extraction efficiency and thereby reduce the level of residual activation in the accelerator components and surrounding shielding.
Can More Environmental Information Disclosure Lead to Higher Eco-Efficiency? Evidence from China
Directory of Open Access Journals (Sweden)
Yantuan Yu
2018-02-01
Full Text Available The present paper investigates the impact of the pollution information transparency index (PITI) on eco-efficiency using a novel panel dataset covering 109 key environmental-protection prefecture-level cities in China over the period 2008–2015. We apply an extended data envelopment analysis (DEA) model that simultaneously incorporates a metafrontier, undesirable outputs, and super-efficiency into a slack-based measure (Meta-US-SBM) to estimate eco-efficiency. Then, the bootstrap Granger causality approach is used to test for a unidirectional Granger causal relationship running from PITI to eco-efficiency. Results of the DEA model show significant spatiotemporal disparities in eco-efficiency: on average, eco-efficiency in the eastern region is higher than in the central and western regions. Estimates from ordinary least squares (OLS), quantile regression, and a spatial Durbin model support an inverted-U-shaped relation between PITI and eco-efficiency, with turning points varying from 0.3370 to 0.4540 across model specifications. Finally, a supplementary panel threshold analysis confirms the robustness of the findings. Policy implications are presented based on the empirical results.
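The Meta-US-SBM model used in the paper is considerably more elaborate, but the underlying DEA idea can be illustrated with a plain input-oriented CCR model solved as a linear program. The function name and toy data are illustrative assumptions, far simpler than the paper's specification.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    # Input-oriented CCR DEA efficiency of decision-making unit k.
    # X: (n_inputs, n_units), Y: (n_outputs, n_units).
    # Solves: min theta  s.t.  X @ lam <= theta * X[:, k],
    #                          Y @ lam >= Y[:, k],  lam >= 0.
    n_units = X.shape[1]
    c = np.zeros(1 + n_units)    # variables: [theta, lam_1..lam_n]
    c[0] = 1.0                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[0]):  # inputs: sum_j lam_j x_ij - theta x_ik <= 0
        A_ub.append(np.concatenate(([-X[i, k]], X[i, :])))
        b_ub.append(0.0)
    for r in range(Y.shape[0]):  # outputs: sum_j lam_j y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, k])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n_units))
    return res.fun

# toy example: unit 1 uses twice the input of unit 0 for the same output
X = np.array([[2.0, 4.0]])   # one input, two units
Y = np.array([[1.0, 1.0]])   # one output, two units
eff = ccr_efficiency(X, Y, 1)
```

A score of 1 marks an efficient unit on the frontier; scores below 1 measure how far the unit's inputs could be proportionally contracted, which is the intuition behind the eco-efficiency scores in the paper.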
High School Computer Science Education Paves the Way for Higher Education: The Israeli Case
Armoni, Michal; Gal-Ezer, Judith
2014-01-01
The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…
Academic Computing Facilities and Services in Higher Education--A Survey.
Warlick, Charles H.
1986-01-01
Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…
Smoothing the payoff for efficient computation of Basket option prices
Bayer, Christian
2017-07-22
We consider the problem of pricing basket options in a multivariate Black–Scholes or Variance-Gamma model. From a numerical point of view, pricing such options corresponds to moderate and high-dimensional numerical integration problems with non-smooth integrands. Due to this lack of regularity, higher order numerical integration techniques may not be directly available, requiring the use of methods like Monte Carlo specifically designed to work for non-regular problems. We propose to use the inherent smoothing property of the density of the underlying in the above models to mollify the payoff function by means of an exact conditional expectation. The resulting conditional expectation is unbiased and yields a smooth integrand, which is amenable to the efficient use of adaptive sparse-grid cubature. Numerical examples indicate that the high-order method may perform orders of magnitude faster than Monte Carlo or Quasi Monte Carlo methods in dimensions up to 35.
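For orientation, the unsmoothed baseline the paper compares against, a plain Monte Carlo pricer for a European basket call under multivariate Black–Scholes, can be sketched as follows; the parameter values are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

def basket_call_mc(S0, w, sigma, corr, r, T, K, n_paths=200_000, seed=0):
    # Plain Monte Carlo price of a European basket call: payoff max(w.S_T - K, 0).
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                        # correlate the Brownian drivers
    Z = rng.standard_normal((n_paths, len(S0))) @ L.T
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST @ w - K, 0.0)                # non-smooth: the source of the difficulty
    return np.exp(-r * T) * payoff.mean()

# illustrative 3-asset basket
S0 = np.array([100.0, 95.0, 105.0])
w = np.array([0.4, 0.3, 0.3])
sigma = np.array([0.2, 0.25, 0.15])
corr = np.array([[1.0, 0.3, 0.3], [0.3, 1.0, 0.3], [0.3, 0.3, 1.0]])
price = basket_call_mc(S0, w, sigma, corr, r=0.05, T=1.0, K=100.0)
```

The kink in the payoff at the strike is exactly the non-smoothness that prevents high-order quadrature here; the paper's contribution is to replace this payoff by an exact conditional expectation whose integrand is smooth.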
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm requires only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.
2014-01-01
SUMMARY: We compare the computational efficiency of isogeometric Galerkin and collocation methods for partial differential equations in the asymptotic regime. We define a metric to identify when numerical experiments have reached this regime. We then apply these ideas to analyze the performance of different isogeometric discretizations, which encompass C0 finite element spaces and higher-continuous spaces. We derive convergence and cost estimates in terms of the total number of degrees of freedom and then perform an asymptotic numerical comparison of the efficiency of these methods applied to an elliptic problem. These estimates are derived assuming that the underlying solution is smooth, the full Gauss quadrature is used in each non-zero knot span and the numerical solution of the discrete system is found using a direct multi-frontal solver. We conclude that under the assumptions detailed in this paper, higher-continuous basis functions provide marginal benefits.
Directory of Open Access Journals (Sweden)
Xianmei Wang
2017-10-01
Full Text Available Sustainability issues in higher education institutions' (HEIs) research, especially in the social sciences, have attracted increasing attention in higher education administration in recent decades, as HEIs worldwide face growing pressure to increase the efficiency of their research activities under a limited volume and relatively equitable division of public funding. This paper introduces a theoretical analysis framework based on data envelopment analysis, separating the social science research process into a foundation stage and a construction stage, and then projecting each HEI into quadrants to form clusters according to their overall and stage efficiencies and corresponding Malmquist Productivity Indices. Furthermore, interfaces are formed in each cluster as feasible directions for potential improvement. The empirical results are demonstrated in detail using a data set of Chinese HEIs in Jiangsu Province over the Twelfth Five-Year period, offering a close approximation to the “China social science research best practice”.
Why do French civil-law countries have higher levels of financial efficiency?
Asongu Simplice
2011-01-01
The dominance of English common-law countries in prospects for financial development in the legal-origins debate has been debunked by recent findings. Using exchange rate regimes and economic/monetary integration oriented hypotheses, this paper proposes an “inflation uncertainty theory” in providing theoretical justification and empirical validity as to why French civil-law countries have higher levels of financial allocation efficiency. Inflation uncertainty, typical of floating exchange rat...
Computationally efficient model predictive control algorithms a neural network approach
Ławryńczuk, Maciej
2014-01-01
This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). · The MPC algorithms with neural approximation with no on-line linearization. · The MPC algorithms with guaranteed stability and robustness. · Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
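As a minimal illustration of the linear MPC machinery such suboptimal algorithms build on (not the book's neural-model algorithms), one unconstrained linear MPC step can be reduced to a regularized least-squares problem over the future input sequence; all names and weights below are illustrative.

```python
import numpy as np

def linear_mpc_step(A, B, x0, x_ref, horizon=10, r_weight=0.01):
    # Unconstrained linear MPC: minimize sum_k ||x_k - x_ref||^2 + r*||u_k||^2
    # for x_{k+1} = A x_k + B u_k, by stacking predictions X = F x0 + G U.
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((n * horizon, m * horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Xref = np.tile(x_ref, horizon)
    # normal equations of min ||G U - (Xref - F x0)||^2 + r ||U||^2
    H = G.T @ G + r_weight * np.eye(m * horizon)
    U = np.linalg.solve(H, G.T @ (Xref - F @ x0))
    return U[:m]  # receding horizon: apply only the first input
```

The suboptimal MPC algorithms in the book replace the fixed (A, B) prediction with an on-line linearization of a neural model, but each control step still reduces to a quadratic problem of this shape.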
Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks
Luo, Ming-Xing
2018-04-01
The correlations in quantum networks have attracted strong interest, with new types of locality violations. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks, including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which allows the construction of a polynomial-time algorithm. For quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic non-multilocality of quantum networks with multiple independent observers using the new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states and some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
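The bipartite matching problem mentioned above is indeed solvable in polynomial time. A minimal sketch of maximum bipartite matching via augmenting paths (Kuhn's algorithm), unrelated to any code in the paper itself:

```python
def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists the right-side vertices adjacent to left-side vertex u.
    # Repeatedly search for an augmenting path from each unmatched left vertex.
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its partner can be re-matched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching
```

Kuhn's algorithm runs in O(V·E) time, comfortably polynomial, which is the property that makes the matching-based construction of the inequalities computationally tractable.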
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Post-weaning feed efficiency decreased in progeny of higher milk yielding beef cows.
Mulliniks, J T; Edwards, S R; Hobbs, J D; McFarlane, Z D; Cope, E R
2018-02-01
Current trends in the beef industry focus on selecting production traits with the purpose of maximizing calf weaning weight; however, such traits may ultimately decrease overall post-weaning productivity. Therefore, the objective of this study was to evaluate the effects of actual milk yield in mature beef cows on their offspring's dry matter intake (DMI), BW, average daily gain, feed conversion ratio (FCR) and residual feed intake (RFI) during a ~75-day backgrounding feeding trial. A period of 24-h milk production was measured with a modified weigh-suckle-weigh technique using a milking machine. After milking, cows were retrospectively classified as one of three milk yield groups: Lower (6.57±1.21 kg), Moderate (9.02±0.60 kg) or Higher (11.97±1.46 kg). Calves from Moderate and Higher milk yielding dams had greater (P < 0.05) BW during the feeding phase; however, day 75 BW was not different (P=0.36) between Lower and Moderate calves. Body weight gain was greater (P=0.05) for Lower and Moderate calves from the day 0 BW to day 35 BW compared with Higher calves. Overall DMI was lower (P=0.03) in offspring from Lower and Moderate cows compared with their Higher milking counterparts. With the decreased DMI, FCR was lower (P=0.03) from day 0 to day 35 in calves from Lower and Moderate milk yielding dams. In addition, overall FCR was lower (P=0.02) in calves from Lower and Moderate milk yielding dams compared with calves from Higher milk yielding dams. However, calves from Lower milk yielding dams had increased (P=0.04) efficiency, reflected in a negative RFI value, compared with calves from Moderate and Higher milking dams. Results from this study suggest that increased milk production in beef cows decreases feed efficiency during a 75-day post-weaning, backgrounding period of progeny.
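Residual feed intake, one of the traits reported above, is conventionally computed as the residual of a regression of observed intake on growth and metabolic body weight; a generic sketch with invented toy numbers (not the study's data):

```python
import numpy as np

def residual_feed_intake(dmi, adg, mid_bw):
    # RFI = observed dry matter intake minus the intake predicted by a
    # linear regression on average daily gain and metabolic mid-test
    # body weight (BW^0.75). Negative RFI = more efficient than predicted.
    X = np.column_stack([np.ones_like(dmi), adg, mid_bw ** 0.75])
    beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
    return dmi - X @ beta

# hypothetical four-animal example (kg/day, kg/day, kg)
dmi = np.array([8.0, 9.0, 10.0, 11.0])
adg = np.array([1.0, 1.2, 1.1, 1.4])
mid_bw = np.array([300.0, 320.0, 310.0, 350.0])
rfi = residual_feed_intake(dmi, adg, mid_bw)
```

Because the regression includes an intercept, RFI values average zero across the test group by construction, so animals are ranked relative to their contemporaries rather than on raw intake.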
Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems
White, M. D.
2011-12-01
phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
Efficient universal quantum channel simulation in IBM's cloud quantum computer
Wei, Shi-Jie; Xin, Tao; Long, Gui-Lu
2018-07-01
The study of quantum channels is an important field that promises a wide range of applications, because any physical process can be represented as a quantum channel that transforms an initial state into a final state. Inspired by the method of performing non-unitary operators by linear combinations of unitary operations, we propose a quantum algorithm for the simulation of the universal single-qubit channel, described by a convex combination of "quasi-extreme" channels corresponding to four Kraus operators; the algorithm is scalable to arbitrarily higher dimensions. We demonstrated the whole algorithm experimentally using the universal IBM cloud-based quantum computer and studied the properties of different qubit quantum channels. We illustrated the quantum capacity of general qubit quantum channels, which quantifies the amount of quantum information that can be protected. The behavior of quantum capacity in different channels revealed which types of noise processes can support information transmission and which are too destructive to protect information. There was general agreement between the theoretical predictions and the experiments, which strongly supports our method. By realizing the arbitrary qubit channel, this work provides a universally accepted way to explore various properties of quantum channels and a novel prospect for quantum communication.
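The Kraus-operator description of a qubit channel mentioned above can be illustrated with a small numerical sketch. The depolarizing channel is used here as a standard textbook example, not the paper's specific "quasi-extreme" decomposition.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    # Kraus operators of the single-qubit depolarizing channel:
    # rho -> (1 - p) rho + p * I/2, written in Pauli form.
    return [np.sqrt(1 - 3 * p / 4) * I2,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def apply_channel(kraus, rho):
    # Channel action: rho -> sum_i K_i rho K_i^dagger.
    return sum(K @ rho @ K.conj().T for K in kraus)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
out = apply_channel(depolarizing_kraus(0.2), rho0)
```

Trace preservation follows from the completeness relation sum_i K_i†K_i = I, which is the property any simulated channel, including the four-Kraus-operator channels in the paper, must satisfy.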
Computationally efficient models of neuromuscular recruitment and mechanics.
Song, D; Raphael, G; Lan, N; Loeb, G E
2008-06-01
We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
Directory of Open Access Journals (Sweden)
hamid reza bazi
2017-12-01
Full Text Available Cloud computing is a new technology that considerably helps Higher Education Institutions (HEIs) develop and create competitive advantage through inherent characteristics such as flexibility, scalability, accessibility, reliability, fault tolerance and economic efficiency. Because of the numerous advantages of cloud computing, and in order to take advantage of cloud computing infrastructure, the services of universities and HEIs need to migrate to the cloud. However, this transition involves many challenges, one of which is the lack or shortage of appropriate architectures for migration to the technology. Using a reliable architecture for migration helps managers mitigate the risks of cloud computing technology, so organizations continually search for suitable cloud computing architectures. In previous studies, these important features have received less attention and have not been addressed in a comprehensive way. The aim of this study is to use a meta-synthesis method, for the first time, to analyze the previously published studies and to suggest an appropriate hybrid cloud migration architecture (IUHEC). We reviewed many papers from relevant journals and conference proceedings. The concepts extracted from these papers were classified into related categories and sub-categories, and we developed our proposed hybrid architecture based on these concepts and categories. The proposed architecture was validated by a panel of experts, and Lawshe's model was used to determine content validity. Due to its innovative yet user-friendly nature, comprehensiveness, and high security, this architecture can help HEIs migrate effectively to a cloud computing environment.
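Lawshe's model, mentioned above as the content-validity instrument, has a simple closed form: for each item, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating the item "essential" out of N panelists. A one-function sketch:

```python
def lawshe_cvr(n_essential, n_panelists):
    # Lawshe's content validity ratio: ranges from -1 (no panelist rates
    # the item essential) through 0 (exactly half do) to +1 (all do).
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# e.g. 8 of 10 experts rate an architecture component essential
cvr = lawshe_cvr(8, 10)
```

Items whose CVR falls below the critical value for the panel size are dropped; this is how an expert panel prunes a proposed framework like the one in the study.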
International Nuclear Information System (INIS)
Hall, M.L.; Davis, A.B.
2005-01-01
Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown
The green computing book tackling energy efficiency at large scale
Feng, Wu-chun
2014-01-01
Low-Power, Massively Parallel, Energy-Efficient Supercomputers, by The Blue Gene Team; Compiler-Driven Energy Efficiency, by Mahmut Kandemir and Shekhar Srikantaiah; An Adaptive Run-Time System for Improving Energy Efficiency, by Chung-Hsing Hsu, Wu-chun Feng, and Stephen W. Poole; Energy-Efficient Multithreading through Run-Time Adaptation; Exploring Trade-Offs between Energy Savings and Reliability in Storage Systems, by Ali R. Butt, Puranjoy Bhattacharjee, Guanying Wang, and Chris Gniady; Cross-Layer Power Management, by Zhikui Wang and Parthasarathy Ranganathan; Energy-Efficient Virtualized Systems, by Ripal Nathuji and K
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that when the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.
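The allocation comparison described above rests on the standard speedup and parallel-efficiency metrics, S = T1/Tp and E = S/p; a small sketch with invented timing numbers (not the paper's measurements) that also flags whether a run fills whole cluster nodes:

```python
def allocation_summary(t_serial, runs, cores_per_node):
    # runs: list of (n_cores, wall_time) pairs for the same simulation.
    # Computes speedup S = T1/Tp and efficiency E = S/p per run, and
    # marks runs whose core count is a multiple of the node size.
    rows = []
    for p, tp in runs:
        s = t_serial / tp
        rows.append({"cores": p,
                     "full_nodes": p % cores_per_node == 0,
                     "speedup": s,
                     "efficiency": s / p})
    return rows

# hypothetical timings: a full-node run vs. a run spanning 1.5 nodes
rows = allocation_summary(1000.0, [(20, 60.0), (30, 45.0)], cores_per_node=20)
```

Comparing efficiency rather than raw wall time is what exposes the cost of allocations that leave nodes partially occupied.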
Directory of Open Access Journals (Sweden)
Pinku Debnath
2017-03-01
Full Text Available Exergy losses during the combustion process, heat transfer, and fuel utilization play a vital role in the analysis of the exergetic efficiency of a combustion process. Detonation is thermodynamically more efficient than the deflagration mode of combustion. Detonation combustion technology inside a pulse detonation engine using hydrogen as fuel is an energetic propulsion system for the next generation. The main objective of this work is to quantify the exergetic efficiency of hydrogen–air combustion for the deflagration and detonation combustion processes. Detonation parameters are further calculated using H2 mass concentrations of 0.25, 0.35, and 0.55 in the combustion process. The simulations were performed to convergence using the commercial computational fluid dynamics package Ansys Fluent. The combustion physics of chemically reacting flows of the hydrogen–air mixture in two control volumes was simulated using a species transport model with eddy-dissipation turbulence-chemistry interaction. From these simulations it was observed that the exergy loss in the deflagration combustion process is higher than in the detonation combustion process. A major observation concerned pilot fuel economy for the two combustion processes: the gains in exergetic efficiency are larger in the detonation combustion process. The maximum exergetic efficiencies obtained for the aforesaid H2 mass fractions are 55.12%, 53.19%, and 23.43% for the deflagration combustion process and 67.55%, 57.49%, and 24.89% for the detonation combustion process. It was also found that lower fuel mass fractions yield higher exergetic efficiency.
Economic efficiency of e-learning in higher education: An industrial approach
Directory of Open Access Journals (Sweden)
Jordi Vilaseca
2008-07-01
Full Text Available Little work has yet been done to analyse whether e-learning is an economically efficient way to produce higher education, especially because no data are available in official statistics. Despite these important constraints, this paper aims to contribute to the study of the economic efficiency of e-learning through the analysis of a sample of e-learning universities over a period of time (1997-2002). We sought empirical evidence on whether e-learning is a feasible model of providing education for universities and on the variables that make feasibility attainable. The main findings are: (1) that the rise in the number of students enrolled is consistent with increasing labour productivity rates; (2) that labour cost savings are explained by the improvement of universities' economic efficiency (or total factor productivity); and (3) that the improvement of total factor productivity in e-learning production is due to the attainment of scale economies, but also to two organisational innovations: outsourcing processes that lead to an increase in variable costs consistent with decreasing marginal costs, and the sharing of assets' control and use, which allows for a rise in asset rotation.
Efficient computation in adaptive artificial spiking neural networks
D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)
2017-01-01
Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of
Directory of Open Access Journals (Sweden)
Sarim Ahmed
2018-06-01
Full Text Available A venturi scrubber is an important element of a Filtered Containment Venting System (FCVS) for the removal of aerosols from contaminated air. The present work involves a computational fluid dynamics (CFD) study of the dust-particle removal efficiency of a venturi scrubber operating in self-priming mode using ANSYS CFX. Titanium oxide (TiO2) particles with sizes of 1 micron have been taken as the dust particles. A CFD methodology to simulate the venturi scrubber was first developed. The cascade atomization and breakup (CAB) model was used to predict the deformation of water droplets, whereas the Eulerian–Lagrangian approach was used to handle multiphase flow involving air, dust, and water. The developed methodology was applied to simulate a venturi scrubber geometry taken from the literature. The dust-particle removal efficiency was calculated for forced-feed operation of the venturi scrubber and found to be in good agreement with the results available in the literature. In the second part, the venturi scrubber along with a tank was modeled in CFX, and transient simulations were performed to study the self-priming phenomenon. Self-priming was observed by plotting the velocity vector fields of the water. Suction of water into the venturi scrubber occurred due to the difference between the static pressure in the venturi scrubber and the hydrostatic pressure of the water inside the tank. The dust-particle removal efficiency was calculated for inlet air velocities of 1 m/s and 3 m/s, and it was observed that the removal efficiency is higher at the higher inlet air velocity. Keywords: Computational Fluid Dynamics, Dust Particles, Filtered Containment Venting System, Self-priming Venturi Scrubber, Venturi Scrubber
Semushin, I. V.; Tsyganova, J. V.; Ugarov, V. V.; Afanasova, A. I.
2018-01-01
Russian higher education institutions' tradition of teaching large-enrolled classes is impairing students' striving for individual prominence, one-upmanship, and hopes for originality. Intending to convert these drawbacks into benefits, a Project-Centred Education Model (PCEM) has been introduced to deliver Computational Mathematics and…
Kankaanpää, Irja; Isomäki, Hannakaisa
2013-01-01
This paper reviews research literature on the production and commercialization of IT-enabled higher education in computer science. Systematic literature review (SLR) was carried out in order to find out to what extent this area has been studied, more specifically how much it has been studied and to what detail. The results of this paper make a…
Business Models of High Performance Computing Centres in Higher Education in Europe
Eurich, Markus; Calleja, Paul; Boutellier, Roman
2013-01-01
High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…
Semushin, I. V.; Tsyganova, J. V.; Ugarov, V. V.; Afanasova, A. I.
2018-05-01
Russian higher education institutions' tradition of teaching large-enrolled classes is impairing students' striving for individual prominence, one-upmanship, and hopes for originality. Intending to convert these drawbacks into benefits, a Project-Centred Education Model (PCEM) has been introduced to deliver Computational Mathematics and Information Science courses. The model combines a Frontal Competitive Approach with a Project-Driven Learning (PDL) framework. The PDL framework has been developed by stating and solving three design problems: (i) enhance the diversity of project assignments on specific computational methods and algorithmic approaches, (ii) balance similarity and dissimilarity of the project assignments, and (iii) develop a software assessment tool suitable for evaluating the technological maturity of students' project deliverables, thus reducing the instructor's workload and the risk of oversight. The positive experience accumulated over 15 years shows that implementing the PCEM keeps students motivated to strive for success in rising to higher levels of their computational and software engineering skills.
Directory of Open Access Journals (Sweden)
Abidin Nur IzieAdiana
2017-01-01
The expansion of Higher Learning Institutions (HLIs) is a global concern for energy demand, since a campus acts like a small city. The intensive mode of operation of a building is correlated with its energy utilization. Improving current energy efficiency is a crucial effort to minimize environmental effects through reduced operational energy use, by retrofitting and upgrading existing building systems or components to be more efficient. Basically, there are three recommended steps for this improvement, known as lean initiatives, green technology, and clean energy, on the way to zero-energy solutions for buildings. This paper aims to highlight the criteria affecting the retrofitting of existing HLI buildings with lean initiatives in order to achieve energy efficiency and a reduction of energy consumption. Attention is devoted to reviewing the lean energy retrofitting initiative criteria for daylighting (side lighting), daylighting (skylight), and glazing. A questionnaire survey was employed and distributed to architects with expertise in green building design. Factor analysis was adopted as the method of analysis, using Principal Component Analysis with Varimax Rotation. The result is presented by summarizing the sub-criteria according to their importance, with factor loadings of 0.50 and above. It was found that the majority of the criteria achieved a significant factor loading value in accordance with the protocol of the analysis. In conclusion, the results of this analysis assist stakeholders in assessing the significant criteria for the desired lean energy retrofitting initiatives and also provide a substantial contribution to future planning of improvements that make existing buildings energy efficient.
Efficient technique for computational design of thermoelectric materials
Núñez-Valdez, Maribel; Allahyari, Zahed; Fan, Tao; Oganov, Artem R.
2018-01-01
Efficient thermoelectric materials are highly desirable, and the quest for finding them has intensified as they could be promising alternatives to fossil energy sources. Here we present a general first-principles approach to predict, in multicomponent systems, efficient thermoelectric compounds. The method combines a robust evolutionary algorithm, a Pareto multiobjective optimization, density functional theory and a Boltzmann semi-classical calculation of thermoelectric efficiency. To test the performance and reliability of our overall framework, we use the well-known system Bi2Te3-Sb2Te3.
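The Pareto multiobjective step can be illustrated in isolation. The sketch below (a generic Pareto-front filter with a maximize-all convention; the two-objective tuples are illustrative assumptions, not the paper's actual stability/thermoelectric objectives) keeps only non-dominated candidates:

```python
def pareto_front(points):
    # Keep candidates not dominated by any other point: q dominates p if
    # q is >= p in every objective and strictly > in at least one
    # (maximize-all convention).
    def dominated(p):
        return any(
            all(qk >= pk for qk, pk in zip(q, p))
            and any(qk > pk for qk, pk in zip(q, p))
            for q in points if q != p
        )
    return [p for p in points if not dominated(p)]
```

In a real search loop, the evolutionary algorithm would re-rank its population against this front each generation.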
Theoretical and methodological grounds of formation of the efficient system of higher education
Directory of Open Access Journals (Sweden)
Raevneva Elena V.
2013-03-01
The goal of the article is to generalise the modern theoretical, methodological, methodical, and instrumentation provisions for building an efficient system of higher education. Analysis of the literature on the problems of building educational systems shows that this issue has been studied at both the theoretical-methodological and the instrumentation level. The article considers the theoretical and methodological level of study and specifies the theories and philosophical schools, concepts, educational paradigms, and scientific approaches used in the formation of an educational paradigm. It considers models of education, along with models and technologies of learning, as instrumental provision. As a result of the analysis, the article concludes that the humanistic paradigm, which is based on the competency-building approach and assumes the use of modern (innovative) learning technologies, should be the foundation of the reformation of the system of higher education. A prospect for further studies in this direction is the formation of competences of potential specialists (graduates of higher educational establishments) taking into account the requirements of employers and the market in general.
Efficient one-way quantum computations for quantum error correction
International Nuclear Information System (INIS)
Huang Wei; Wei Zhaohui
2009-01-01
We show how to explicitly construct an O(nd) size and constant quantum depth circuit which encodes any given n-qubit stabilizer code with d generators. Our construction is derived using the graphic description for stabilizer codes and the one-way quantum computation model. Our result demonstrates how to use cluster states as scalable resources for many multi-qubit entangled states and how to use the one-way quantum computation model to improve the design of quantum algorithms.
Accurate and efficient computation of synchrotron radiation functions
International Nuclear Information System (INIS)
MacLeod, Allan J.
2000-01-01
We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x ∫_x^∞ K_{5/3}(y) dy, F_p(x) = x K_{2/3}(x), and G_p(x) = x^{1/3} K_{1/3}(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
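For readers without the Chebyshev coefficients at hand, these functions can be approximated directly from the standard integral representation K_ν(x) = ∫_0^∞ e^(−x·cosh t) cosh(νt) dt. The pure-Python sketch below (an illustration using trapezoidal quadrature, far less accurate than the paper's 15-figure series) is not the authors' method:

```python
import math

def bessel_k(nu, x, t_max=20.0, n=20000):
    # K_nu(x) = ∫_0^∞ exp(-x·cosh t)·cosh(nu·t) dt,
    # approximated with the composite trapezoidal rule on [0, t_max].
    h = t_max / n
    end = math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max)
    total = 0.5 * (math.exp(-x) + end)
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return h * total

def F_p(x):
    # F_p(x) = x·K_{2/3}(x)
    return x * bessel_k(2.0 / 3.0, x)

def G_p(x):
    # G_p(x) = x^{1/3}·K_{1/3}(x)
    return x ** (1.0 / 3.0) * bessel_k(1.0 / 3.0, x)
```

The quadrature can be sanity-checked against the closed form K_{1/2}(x) = sqrt(π/(2x))·e^(−x).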
An efficient algorithm for nucleolus and prekernel computation in some classes of TU-games
Faigle, U.; Kern, Walter; Kuipers, J.
1998-01-01
We consider classes of TU-games. We show that we can efficiently compute an allocation in the intersection of the prekernel and the least core of the game if we can efficiently compute the minimum excess for any given allocation. In the case where the prekernel of the game contains exactly one core…
Achieving higher efficiency of production through knowledge management via social capital management
Directory of Open Access Journals (Sweden)
Jana Plchová
2015-09-01
The article presents a new approach to reaching higher efficiency in production, moving from knowledge management to the management of social capital through measurement, motivation, and stimulation. A test of a Toyota-system implementation in a real company is described. The active involvement of people is an important part of the Toyota system's success. This is taken for granted in Japan but creates a big problem in Europe. These problems were examined in order to answer the following questions: 1. Is it possible to measure the level of the social system before the system is applied? 2. Is it possible to evaluate in advance the level of the social system necessary for successful implementation? 3. Is it possible to cultivate the social system to the desired level? We try to answer all of these questions by adopting the Kopčaj Spiral Management approach. Practical results from an existing company are presented together with managerial recommendations.
Limits on efficient computation in the physical world
Aaronson, Scott Joel
More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure…
Computational methods for more fuel-efficient ship
Koren, B.
2008-01-01
The flow of water around a ship powered by a combustion engine is a key factor in the ship's fuel consumption. The simulation of flow patterns around ship hulls is therefore an important aspect of ship design. While lengthy computations are required for such simulations, research by Jeroen Wackers
Efficient computation in networks of spiking neurons: simulations and theory
International Nuclear Information System (INIS)
Natschlaeger, T.
1999-01-01
One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and is still able to perform fast complex computations. In their natural environment, biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand, and possible computations on time series on the other hand. Furthermore, we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)
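As a minimal illustration of the spiking units discussed here, a leaky integrate-and-fire (LIF) neuron can be simulated in a few lines (a standard textbook model; the parameters below are illustrative assumptions, not values from the thesis):

```python
def simulate_lif(i_input, t_steps=1000, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: tau * dV/dt = -V + I; when V reaches the
    # threshold the neuron emits a spike and the potential is reset.
    v = 0.0
    spike_times = []
    for step in range(t_steps):
        v += (dt / tau) * (-v + i_input)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times
```

A constant input above threshold (here 1.5 > 1.0) produces a regular spike train; a subthreshold input (0.5) never fires, since the membrane potential saturates below threshold.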
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-09-19
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
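A standard example of such a numerical approximation is the Euler–Maruyama scheme. The sketch below applies it to geometric Brownian motion (an illustrative SDE with assumed drift and volatility, not one of the dissertation's models) and estimates the Quantity of Interest E[S_T] by Monte Carlo:

```python
import math
import random

def euler_maruyama_gbm(s0, mu, sigma, t_end, n_steps, rng):
    # One path of geometric Brownian motion dS = mu*S dt + sigma*S dW,
    # discretized as S += mu*S*dt + sigma*S*sqrt(dt)*Z with Z ~ N(0, 1).
    dt = t_end / n_steps
    s = s0
    for _ in range(n_steps):
        s += mu * s * dt + sigma * s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return s

def estimate_terminal_mean(n_paths=20000, seed=0):
    # Monte Carlo estimate of E[S_T]; for GBM the exact value is s0*exp(mu*T).
    rng = random.Random(seed)
    total = sum(euler_maruyama_gbm(1.0, 0.05, 0.2, 1.0, 50, rng)
                for _ in range(n_paths))
    return total / n_paths
```

With 20,000 paths the estimate lands close to the analytic expectation e^{0.05} ≈ 1.051, up to Monte Carlo noise and the scheme's O(Δt) weak bias.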
An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.
Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei
2016-02-18
Cloud computing has transformed the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging on top of cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining-utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there is a trade-off between a cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
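The paper's RUA and PA algorithms are not reproduced here; as a hedged sketch of the general idea behind utilization-aware consolidation, the following best-fit-decreasing placement keeps a headroom below full capacity to avoid post-placement overload (the function names and the 0.8 target utilization are illustrative assumptions):

```python
def place_vms(vm_demands, host_capacity, target_util=0.8):
    # Best-fit-decreasing sketch: sort VMs by demand, place each on the
    # fullest active host that still stays under the headroom limit
    # (target_util * capacity); open a new host only when none fits.
    limit = target_util * host_capacity
    hosts = []       # used capacity per active host
    placement = {}   # vm index -> host index
    for idx in sorted(range(len(vm_demands)), key=lambda k: -vm_demands[k]):
        vm = vm_demands[idx]
        feasible = [h for h, used in enumerate(hosts) if used + vm <= limit]
        if feasible:
            h = max(feasible, key=lambda k: hosts[k])  # best fit: least room left
            hosts[h] += vm
        else:
            hosts.append(vm)
            h = len(hosts) - 1
        placement[idx] = h
    return placement, hosts
```

Packing tightly minimizes the number of active hosts (energy), while the headroom term is what trades energy against SLA violations.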
Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
2016-05-11
Efficient Computation of Exposure Profiles for Counterparty Credit Risk
de Graaf, C.S.L.; Feng, Q.; Kandhai, D.; Oosterlee, C.W.
2014-01-01
Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the
Efficient computation of exposure profiles for counterparty credit risk
C.S.L. de Graaf (Kees); Q. Feng (Qian); B.D. Kandhai; C.W. Oosterlee (Cornelis)
2014-01-01
Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk…
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-01-01
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
Using Weighted Graphs for Computationally Efficient WLAN Location Determination
DEFF Research Database (Denmark)
Thomsen, Bent; Hansen, Rene
2007-01-01
…use of existing WLAN infrastructures. The technique consists of building a radio map of signal-strength measurements which is searched to determine a position estimate. While the fingerprinting technique has produced good positioning accuracy results, it incurs a substantial computational…
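The fingerprinting search itself is simple to sketch. The snippet below shows plain nearest-neighbor matching over a radio map (the weighted-graph optimization that is this paper's actual contribution is omitted; the map and RSSI values are illustrative assumptions):

```python
import math

def locate(radio_map, observed):
    # radio_map: {position: fingerprint}, each fingerprint a list of RSSI
    # values (one per access point); observed: the live RSSI vector.
    # Return the surveyed position whose fingerprint is nearest in signal space.
    def dist(fingerprint):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(fingerprint, observed)))
    return min(radio_map, key=lambda pos: dist(radio_map[pos]))
```

The brute-force scan over every surveyed position is exactly the cost that structure-aware approaches try to avoid.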
A Memory and Computation Efficient Sparse Level-Set Method
Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.
Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the
Directory of Open Access Journals (Sweden)
Nataliia A. Khmil
2016-01-01
In the present article, foreign and domestic experience of integrating cloud computing into the pedagogical process of higher educational establishments (H.E.E.) has been generalized. It is noted that nowadays many educational services are hosted in the cloud, e.g. infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The peculiarities of implementing cloud technologies in H.E.E. in Ukraine and abroad have been singled out, and the products developed by leading IT companies for using cloud computing in the higher education system, such as Microsoft for Education, Google Apps for Education, and Amazon AWS Educate, have been reviewed. Examples of concrete types, methods, and forms of learning and research work based on cloud services have been provided.
Robin H. Kay; Sharon Lauricella
2011-01-01
Because of decreased prices, increased convenience, and wireless access, an increasing number of college and university students are using laptop computers in their classrooms. This recent trend has forced instructors to address the educational consequences of using these mobile devices. The purpose of the current study was to analyze and assess beneficial and challenging laptop behaviours in higher education classrooms. Both quantitative and qualitative data were collected from 177 undergrad...
Directory of Open Access Journals (Sweden)
Supat Faarungsang
2017-04-01
The Reverse Threshold Model Theory (RTMT) model was introduced based on limiting-factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT compared to CM using computer simulation on the "One Laptop Per Child" computer and a desktop computer. Based on probability values, it was found that RTMT was more efficient than CM among eight treatment combinations, and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed to be applied to most research data.
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
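The flavor of a homogenization-based change of support can be seen in the classical one-dimensional result: for steady diffusion through layers in series, the upscaled coefficient is the width-weighted harmonic mean of the layer coefficients (a textbook identity used here as an illustration, not the paper's ecological-diffusion operator):

```python
def effective_diffusivity(d_layers, widths):
    # 1-D layers in series: the homogenized (upscaled) diffusion coefficient
    # is the width-weighted harmonic mean of the layer coefficients.
    return sum(widths) / sum(w / d for d, w in zip(d_layers, widths))
```

Note that slow layers dominate: one low-diffusivity layer drags the effective coefficient well below the arithmetic mean, which is why naive averaging over coarse grid cells misrepresents the fine-scale dynamics.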
Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing
Xing Liu; Chaowei Yuan; Zhen Yang; Enda Peng
2015-01-01
Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also reduce operating energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels while offloading is being executed. In this paper, we address the issue of ener…
Efficient Strategy Computation in Zero-Sum Asymmetric Repeated Games
Li, Lichun
2017-03-06
Zero-sum asymmetric games model decision-making scenarios involving two competing players who have different information about the game being played. A particular case is that of nested information, where one (informed) player has superior information over the other (uninformed) player. This paper considers the case of nested information in repeated zero-sum games and studies the computation of strategies for both the informed and uninformed players for finite-horizon and discounted infinite-horizon nested information games. For finite-horizon settings, we exploit the fact that, for both players, the security strategy, and also the opponent's corresponding best response, depend only on the informed player's history of actions. Using this property, we refine the sequence form and formulate an LP computation of player strategies that is linear in the size of the uninformed player's action set. For the infinite-horizon discounted game, we construct LP formulations to compute approximated security strategies for both players, and provide a bound on the performance difference between the approximated security strategies and the exact security strategies. Finally, we illustrate the results on a network interdiction game between an informed system administrator and an uninformed intruder.
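The paper computes security strategies via LP formulations over the sequence form. As a lightweight stand-in for the symmetric-information case, fictitious play also converges to the security strategies of a zero-sum matrix game; the sketch below uses matching pennies as an illustrative example (an assumption for demonstration, unrelated to the network interdiction game):

```python
def fictitious_play(payoff, n_iters=5000):
    # payoff[i][j] is the row player's payoff in a zero-sum matrix game.
    # Both players repeatedly best-respond to the opponent's empirical
    # action frequencies; in zero-sum games the empirical frequencies
    # converge to security (maximin / minimax) strategies (Robinson, 1951).
    m, n = len(payoff), len(payoff[0])
    row_counts = [1] + [0] * (m - 1)   # arbitrary initial beliefs
    col_counts = [1] + [0] * (n - 1)
    for _ in range(n_iters):
        # Row best-responds to column frequencies (maximize expected payoff).
        i = max(range(m), key=lambda a: sum(payoff[a][b] * col_counts[b] for b in range(n)))
        # Column best-responds to row frequencies (minimize row's payoff).
        j = min(range(n), key=lambda b: sum(payoff[a][b] * row_counts[a] for a in range(m)))
        row_counts[i] += 1
        col_counts[j] += 1
    t = n_iters + 1
    return [c / t for c in row_counts], [c / t for c in col_counts]
```

For matching pennies the unique security strategy is uniform play, and the empirical frequencies settle near (1/2, 1/2).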
Energy-efficient cloud computing : autonomic resource provisioning for datacenters
Tesfatsion, Selome Kostentinos
2018-01-01
Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focu...
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
…to the memory architectures of CPUs and GPUs to obtain good performance, including good memory performance through cache management. The PI and students have developed new methods for path and ray tracing. The efficiency of our method makes it a good candidate for forming hybrid schemes with wave-based models. One possibility is to couple the ray curve…
Directory of Open Access Journals (Sweden)
A. O. Lovska
2017-02-01
Purpose. The article aims to improve the supporting structure of the platform car in order to increase the efficiency of container transportation. Methodology. To achieve this objective, strength investigations of the universal platform car of model 13-401 were conducted, strength reserves of the supporting elements were defined, and more optimal profiles for the main longitudinal beams of the frame, in terms of minimum material capacity, were proposed. The correctness of this decision was confirmed by a strength calculation of the platform car's supporting structure under the basic operational loading modes and in fatigue, taking into account a research base of 10^7 cycles. It has been proposed to equip the platform car with swing fitting stops for fastening containers on the frame, which allows transportation of 20-ft and 40-ft containers. In order to improve the efficiency of container transportation along the international transport corridors running through Ukraine, a platform car of articulated type has been designed on the basis of the improved platform car structure. Mathematical simulation of the dynamic loads on the platform car with containers (two 1CC containers) under operational loading modes has been carried out, the maximum accelerations influencing the supporting structure have been defined, and their values have been taken into account in computer simulation of the strength of the articulated platform car. Findings. The supporting structure of an articulated platform car based on the standard platform car has been developed. Refined values of the dynamic loads influencing the supporting structure of the articulated platform car with containers under operational loading modes have been obtained, and the maximum equivalent stresses in the platform car's supporting structure have been defined. Originality and practical value. A mathematical model of displacements for an articulated platform car with containers under operational loading modes of…
Work-Efficient Parallel Skyline Computation for the GPU
DEFF Research Database (Denmark)
Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira
2015-01-01
…offers the potential for parallelizing skyline computation across thousands of cores. However, attempts to port skyline algorithms to the GPU have prioritized throughput and failed to outperform sequential algorithms. In this paper, we introduce a new skyline algorithm, designed for the GPU, that uses a global, static partitioning scheme. With the partitioning, we can permit controlled branching to exploit transitive relationships and avoid most point-to-point comparisons. The result is a non-traditional GPU algorithm, SkyAlign, that prioritizes work-efficiency and respectable throughput, rather than…
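For reference, the skyline operator itself (independent of SkyAlign's partitioning and GPU specifics, which are not reproduced here) reduces to a dominance filter; a minimal CPU sketch with a minimize-all convention:

```python
def skyline(points):
    # q dominates p (minimize-all convention) if q is <= p in every
    # dimension and strictly < in at least one; the skyline keeps the
    # non-dominated points.
    def dominates(q, p):
        return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

This naive version performs O(n^2) point-to-point comparisons, which is exactly the work that partitioning schemes like the paper's aim to prune.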
An Efficient Computational Technique for Fractal Vehicular Traffic Flow
Directory of Open Access Journals (Sweden)
Devendra Kumar
2018-04-01
In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method. Some illustrative examples are given to demonstrate the success of the suggested techniques. The results derived with the aid of the suggested schemes reveal that the present schemes are very efficient for obtaining the non-differentiable solution to the fractal vehicular traffic flow problem.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
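The MCMC machinery underneath DRAM can be illustrated with its simplest ancestor, random-walk Metropolis, on a toy one-parameter posterior (the data, flat prior, and step size below are illustrative assumptions; DRAM's delayed rejection and adaptation are not reproduced):

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    # Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    # probability min(1, post(x') / post(x)), evaluated in log space.
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: flat prior with an N(m, 1) likelihood for three
# observations, so the posterior over the mean m is the likelihood itself.
data = [4.8, 5.1, 5.3]
log_post = lambda m: -0.5 * sum((y - m) ** 2 for y in data)
```

After discarding a burn-in, the sample mean approximates the posterior mean (here the data average), which is the same estimate-from-samples step the damage-parameter framework relies on.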
Selected Private Higher Educational Institutions in Metro Manila: A DEA Efficiency Measurement
de Guzman, Maria Corazon Gwendolyn N.; Cabana, Emilyn
2009-01-01
This paper measures the technical efficiency of 16 selected colleges and universities in Metro Manila, Philippines, using academic data for SY 2001-2005. Using data envelopment analysis (DEA), on average, schools posted a 0.807 index score and need an additional 19.3% efficiency growth to become efficient. Overall, the top four efficient…
CERN. Geneva
2012-01-01
With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...
Higher Education Cloud Computing in South Africa: Towards Understanding Trust and Adoption issues
Directory of Open Access Journals (Sweden)
Karl Van Der Schyff
2014-12-01
This paper sets out to study the views of key stakeholders on the issue of cloud information security within institutions of Higher Education. A specific focus is on understanding trust and the adoption of cloud computing in the context of the unique operational requirements of South African universities. Contributions are made on both a methodological and a theoretical level. Methodologically, the study contributes by employing an interpretivist approach and using thematic analysis in a topic area often studied quantitatively, thus affording researchers the opportunity to gain the necessary in-depth insight into how key stakeholders view cloud security and trust. A theoretical contribution is made in the form of a trust-centric conceptual framework that illustrates how the qualitative data relate to concepts innate to cloud computing trust and adoption. Both of these contributions lend credence to the fact that there is a need to address cloud information security with a specific focus on the contextual elements that surround South African universities. The paper concludes with some considerations for implementing and investigating cloud computing services in Higher Education contexts in South Africa.
Efficient relaxed-Jacobi smoothers for multigrid on parallel computers
Yang, Xiang; Mittal, Rajat
2017-03-01
In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform the lexicographic Gauss-Seidel by factors that increase with domain partition count.
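The two-sweep relaxed-Jacobi idea can be sketched for a 1D Poisson problem. The relaxation factors below are illustrative Chebyshev-style choices aimed at the oscillatory error range, not the optimal values derived in the paper:

```python
import numpy as np

def relaxed_jacobi_sweep(u, f, h, omega):
    """One weighted-Jacobi sweep for -u'' = f on a uniform 1D grid with
    fixed (Dirichlet) end values; only interior points are updated."""
    u_new = u.copy()
    u_new[1:-1] = (1.0 - omega) * u[1:-1] + omega * 0.5 * (
        u[:-2] + u[2:] + h * h * f[1:-1])
    return u_new

def src_smoother(u, f, h, omegas=(0.87, 0.54)):
    """Two successive relaxed-Jacobi sweeps with different relaxation
    factors (illustrative values, not the paper's derived optima)."""
    for omega in omegas:
        u = relaxed_jacobi_sweep(u, f, h, omega)
    return u

# Smoothing demo: an oscillatory error mode on a zero-RHS problem
# (exact solution u = 0) is damped rapidly by repeated double sweeps.
n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.sin(24 * np.pi * x)   # oscillatory error component, amplitude 1
f = np.zeros(n)
for _ in range(5):
    u = src_smoother(u, f, h)
print(np.max(np.abs(u)))     # far below the initial amplitude of 1.0
```

Because each sweep depends only on the previous iterate, the smoother behaves identically however the domain is partitioned, which is the property the smoothers above exploit.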
Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model
Directory of Open Access Journals (Sweden)
Rory A. Roberts
2014-01-01
Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
Efficient Computational Design of a Scaffold for Cartilage Cell Regeneration
DEFF Research Database (Denmark)
Tajsoleiman, Tannaz; Jafar Abdekhodaie, Mohammad; Gernaey, Krist V.
2018-01-01
Due to the sensitivity of mammalian cell cultures, understanding the influence of operating conditions during a tissue generation procedure is crucial. In this regard, a detailed study of scaffold based cell culture under a perfusion flow is presented with the aid of mathematical modelling...... and computational fluid dynamics (CFD). With respect to the complexity of the case study, this work focuses solely on the effect of nutrient and metabolite concentrations, and the possible influence of fluid-induced shear stress on a targeted cell (cartilage) culture. The simulation set up gives the possibility...... of predicting the cell culture behavior under various operating conditions and scaffold designs. Thereby, the exploitation of the predictive simulation into a newly developed stochastic routine provides the opportunity of exploring improved scaffold geometry designs. This approach was applied on a common type...
The computational optimization of heat exchange efficiency in stack chimneys
Energy Technology Data Exchange (ETDEWEB)
Van Goch, T.A.J.
2012-02-15
For many industrial processes, the chimney is the final step before hot fumes, with high thermal energy content, are discharged into the atmosphere. Tapping into this energy and utilizing it for heating or cooling applications, could improve sustainability, efficiency and/or reduce operational costs. Alternatively, an unused chimney, like the monumental chimney at the Eindhoven University of Technology, could serve as an 'energy channeler' once more; it can enhance free cooling by exploiting the stack effect. This study aims to identify design parameters that influence annual heat exchange in such stack chimney applications and optimize these parameters for specific scenarios to maximize the performance. Performance is defined by annual heat exchange, system efficiency and costs. The energy required for the water pump as compared to the energy exchanged, defines the system efficiency, which is expressed in an efficiency coefficient (EC). This study is an example of applying building performance simulation (BPS) tools for decision support in the early phase of the design process. In this study, BPS tools are used to provide design guidance, performance evaluation and optimization. A general method for optimization of simulation models will be studied, and applied in two case studies with different applications (heating/cooling), namely; (1) CERES case: 'Eindhoven University of Technology monumental stack chimney equipped with a heat exchanger, rejects heat to load the cold source of the aquifer system on the campus of the university and/or provides free cooling to the CERES building'; and (2) Industrial case: 'Heat exchanger in an industrial stack chimney, which recoups heat for use in e.g. absorption cooling'. The main research question, addressing the concerns of both cases, is expressed as follows: 'what is the optimal set of design parameters so heat exchange in stack chimneys is optimized annually for the cases in which a
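The efficiency coefficient (EC) defined above, annual heat exchanged relative to the pump energy spent, can be expressed as a one-line helper; the annual figures used here are hypothetical:

```python
def efficiency_coefficient(heat_exchanged_kwh, pump_energy_kwh):
    """EC as described above: annual heat exchanged per unit of pump energy.
    (Illustrative formulation; the study's exact definition may differ.)"""
    if pump_energy_kwh <= 0:
        raise ValueError("pump energy must be positive")
    return heat_exchanged_kwh / pump_energy_kwh

# Hypothetical annual figures for a stack-chimney heat exchanger.
print(efficiency_coefficient(heat_exchanged_kwh=120_000, pump_energy_kwh=8_000))  # 15.0
```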
Lin, Wushao; Bi, Lei; Liu, Dong; Zhang, Kejun
2017-08-21
The extinction efficiencies of atmospheric particles are essential to determining radiation attenuation and thus are fundamentally related to atmospheric radiative transfer. The extinction efficiencies can also be used to retrieve particle sizes or refractive indices through particle characterization techniques. This study first uses the Debye series to improve the accuracy of high-frequency extinction formulae for spheroids in the context of Complex angular momentum theory by determining an optimal number of edge-effect terms. We show that the optimal edge-effect terms can be accurately obtained by comparing the results from the approximate formula with their counterparts computed from the invariant imbedding Debye series and T-matrix methods. An invariant imbedding T-matrix method is employed for particles with strong absorption, in which case the extinction efficiency is equivalent to two plus the edge-effect efficiency. For weakly absorptive or non-absorptive particles, the T-matrix results contain the interference between the diffraction and higher-order transmitted rays. Therefore, the Debye series was used to compute the edge-effect efficiency by separating the interference from the transmission on the extinction efficiency. We found that the optimal number strongly depends on the refractive index and is relatively insensitive to the particle geometry and size parameter. By building a table of optimal numbers of edge-effect terms, we developed an efficient and accurate extinction simulator that has been fully tested for randomly oriented spheroids with various aspect ratios and a wide range of refractive indices.
The peak efficiency calibration of volume source using 152Eu point source in computer
International Nuclear Information System (INIS)
Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo
1997-01-01
The author describes a method for peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results to within ±3.8%, with one exception at about ±7.4%.
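A minimal sketch of the Monte Carlo idea, reduced to the geometric (solid-angle) part of the efficiency for an on-axis point source above a circular detector face; a real HPGe peak-efficiency calculation would also track photon transport and energy deposition inside the crystal. The radius and distance values are hypothetical:

```python
import math
import random

def geometric_efficiency(n_samples, det_radius, source_height, seed=1):
    """Monte Carlo estimate of the fraction of isotropically emitted photons
    from an on-axis point source that strike a circular detector face
    at distance source_height (geometry only, no photon transport)."""
    rng = random.Random(seed)
    cos_cut = source_height / math.hypot(source_height, det_radius)
    hits = sum(1 for _ in range(n_samples)
               if rng.uniform(-1.0, 1.0) > cos_cut)  # isotropic: cos(theta) uniform
    return hits / n_samples

R, h = 3.0, 5.0  # cm; hypothetical detector radius and source distance
mc = geometric_efficiency(200_000, R, h)
exact = 0.5 * (1.0 - h / math.hypot(h, R))  # analytic solid-angle fraction
print(mc, exact)  # the estimate converges to the analytic value
```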
Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment
Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James
2013-01-01
peer-reviewed In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...
Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim
2014-01-01
Higher order statistics (HOS) of the channel capacity provide useful information regarding the level of reliability of signal transmission at a particular rate. In this paper, we propose a novel and unified analysis, which is based on the moment-generating function (MGF) approach, to efficiently and accurately compute the HOS of the channel capacity for amplify-and-forward (AF) multihop transmission over generalized fading channels. More precisely, our easy-to-use and tractable mathematical formalism requires only the reciprocal MGFs of the transmission hop signal-to-noise ratio (SNR). Numerical and simulation results, which are performed to exemplify the usefulness of the proposed MGF-based analysis, are shown to be in perfect agreement. © 2013 IEEE.
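As a toy cross-check of the quantities involved, the first two moments of C = log2(1 + γ) can be estimated by Monte Carlo for a single Rayleigh-fading hop (γ exponentially distributed); the MGF-based analysis above computes these HOS analytically for AF multihop links:

```python
import numpy as np

def capacity_moments(avg_snr, orders=(1, 2), n=200_000, seed=0):
    """Monte Carlo moments E[C^k] of the capacity C = log2(1 + gamma) of a
    single Rayleigh-fading hop, where the SNR gamma is exponentially
    distributed with mean avg_snr."""
    rng = np.random.default_rng(seed)
    gamma = rng.exponential(avg_snr, size=n)
    c = np.log2(1.0 + gamma)
    return {k: float(np.mean(c ** k)) for k in orders}

m = capacity_moments(avg_snr=10.0)
variance = m[2] - m[1] ** 2   # a simple reliability indicator built from HOS
print(m[1], variance)
```

The capacity variance built from the first two moments is exactly the kind of reliability indicator the higher-order statistics provide.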
An efficient network for interconnecting remote monitoring instruments and computers
International Nuclear Information System (INIS)
Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.
1994-01-01
Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants presents problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, RF waves, and standard AC power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
Computationally efficient thermal-mechanical modelling of selective laser melting
Yang, Yabin; Ayas, Can
2017-10-01
Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
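The superposition idea can be sketched with instantaneous line heat sources. For a flat adiabatic surface the complementary field reduces to a mirror-image source, whereas the paper computes the complementary field numerically for the actual boundary conditions; the material values below are illustrative, not calibrated to any alloy:

```python
import math

def line_source_temp(q_per_len, k, alpha, r, t):
    """Temperature rise at radius r and time t from an instantaneous line
    heat source of strength q_per_len (J/m) in an infinite medium."""
    return q_per_len / (4.0 * math.pi * k * t) * math.exp(-r * r / (4.0 * alpha * t))

def semi_infinite_temp(q_per_len, k, alpha, x, z, src_z, t):
    """Semi-infinite medium with an adiabatic surface at z = 0: superpose
    the real source at depth src_z and its mirror image at -src_z."""
    r_real = math.hypot(x, z - src_z)
    r_image = math.hypot(x, z + src_z)
    return (line_source_temp(q_per_len, k, alpha, r_real, t)
            + line_source_temp(q_per_len, k, alpha, r_image, t))

# Illustrative values only (J/m, W/(m K), m^2/s), not calibrated to any alloy.
q, k, alpha = 50.0, 20.0, 5e-6
print(semi_infinite_temp(q, k, alpha, x=1e-5, z=2e-5, src_z=2e-5, t=1e-4))
```

Summing such contributions over the sources laid along each scan vector gives the superposed temperature field described above.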
Efficient Minimum-Phase Prefilter Computation Using Fast QL-Factorization
DEFF Research Database (Denmark)
Hansen, Morten; Christensen, Lars P.B.
2009-01-01
This paper presents a novel approach for computing both the minimum-phase filter and the associated all-pass filter in a computationally efficient way using the fast QL-factorization. A desirable property of this approach is that the complexity is independent of the size of the matrix which is QL...
Energy-Efficient Abundant-Data Computing: The N3XT 1,000X
Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry
2015-01-01
Next generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.
The thermodynamic efficiency of computations made in cells across the range of life
Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan
2017-11-01
Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
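The Landauer bound referenced above is easy to evaluate; the factor-of-ten example operation below is a hypothetical stand-in for the order-of-magnitude gap the paper reports for translation:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_bound(temp_kelvin):
    """Minimum free energy required to erase one bit: k_B * T * ln 2."""
    return K_B * temp_kelvin * math.log(2.0)

bound = landauer_bound(300.0)   # about 2.87e-21 J per bit near room temperature
# Hypothetical per-operation energy one order of magnitude above the bound,
# roughly the regime the paper reports for translation.
example_op = 10.0 * bound
print(bound, example_op / bound)
```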
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
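A minimal sketch of conjugate gradient with the diagonal (Jacobi) preconditioner proposed above; the small SPD matrix stands in for a manipulator inertia matrix and is purely illustrative:

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with the diagonal (Jacobi) preconditioner
    M = diag(A); applying M^-1 costs only O(n) per iteration."""
    x = np.zeros(len(b))
    r = b - A @ x
    m_inv = 1.0 / np.diag(A)      # diagonal preconditioner, as proposed above
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SPD system standing in for M(q) * qdd = b; purely illustrative.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6.0 * np.eye(6)
b = rng.standard_normal(6)
x = preconditioned_cg(A, b)
print(np.linalg.norm(A @ x - b))  # residual is driven to near machine precision
```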
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos, E-mail: danielgonro@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la, E-mail: lgarcia@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba); Sanchez, Danny [Universidade Estadual de Santa Cruz (UESC), Ilheus, BA (Brazil)
2015-07-01
High-temperature electrolysis coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, there are no references in the scientific literature to a test facility that would allow evaluating the efficiency of the process and the other physical parameters that must be taken into consideration for its accurate application in the hydrogen economy as a massive production method. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high-temperature electrolysis hydrogen production flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency in the module. This optimized model of the electrolyzer will be incorporated into a chemical process simulation (CPS) code to study the overall high-temperature flowsheet coupled to a high-temperature accelerator driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)
Energy Technology Data Exchange (ETDEWEB)
Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
On efficiently computing multigroup multi-layer neutron reflection and transmission conditions
International Nuclear Information System (INIS)
Abreu, Marcos P. de
2007-01-01
In this article, we present an algorithm for efficient computation of multigroup discrete ordinates neutron reflection and transmission conditions, which replace a multi-layered boundary region in neutron multiplication eigenvalue computations with no spatial truncation error. In contrast to the independent layer-by-layer algorithm considered thus far in our computations, the algorithm here is based on an inductive approach developed by the present author for deriving neutron reflection and transmission conditions for a nonactive boundary region with an arbitrary number of arbitrarily thick layers. With this new algorithm, we were able to increase significantly the computational efficiency of our spectral diamond-spectral Green's function method for solving multigroup neutron multiplication eigenvalue problems with multi-layered boundary regions. We provide comparative results for a two-group reactor core model to illustrate the increased efficiency of our spectral method, and we conclude this article with a number of general remarks. (author)
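The inductive layer-combination idea can be illustrated with a one-group scalar toy model, assuming each layer reflects and transmits symmetrically: two layers combine through a geometric-series factor that sums all inter-layer multiple reflections. The paper's version works with multigroup discrete-ordinates operators, so the scalars below would become matrices:

```python
def combine_layers(ra, ta, rb, tb):
    """Reflection/transmission of two symmetric layers in series (a on the
    incident side, then b). The 1/(1 - ra*rb) factor sums the geometric
    series of inter-layer multiple reflections."""
    denom = 1.0 - ra * rb
    return ra + ta * rb * ta / denom, ta * tb / denom

# Build up a three-layer boundary region inductively, one layer at a time.
layers = [(0.3, 0.7), (0.2, 0.8), (0.4, 0.6)]   # (r, t) per layer, r + t = 1
r, t = layers[0]
for rb, tb in layers[1:]:
    r, t = combine_layers(r, t, rb, tb)
print(r, t, r + t)   # r + t stays 1: nothing is lost in non-absorbing layers
```

The combined (r, t) pair replaces the whole multi-layer region at the boundary, which is the sense in which the conditions carry no spatial truncation error.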
Impact of higher energy efficiency standards on housing affordability in Alberta
International Nuclear Information System (INIS)
2010-07-01
As a result of changes to provincial and national building codes and to energy costs, the impact of increasing energy efficiency standards on housing affordability has been questioned. Determining housing affordability is a complicated process. This report presented the results of a costing analysis completed for upgrades to EnerGuide 80 levels of energy efficiency in homes in Calgary and Edmonton, Alberta. The elements of residential construction were identified. In order to better understand the cost impact of energy efficiency upgrades on a home, pricing data was obtained. Costing elements that were examined included housing price indexes; construction material price indexes; unionized trade wages; and land value. Specifically, the report presented the new housing price index analysis using material and labour costs. An analysis of energy efficiency improvement was then presented in terms of lifecycle costs (capital costs and life cycle costing results). It was concluded that although the price of labour and materials is increasing, the value of land is the primary driver for rising house prices. The price of housing is strongly correlated to the price of land and not the price of labour or materials. In addition, moving to EnerGuide 80 levels of energy efficiency for housing in Alberta made homes more affordable for homebuyers by lowering their total monthly housing costs. 4 tabs., 3 figs., 3 appendices.
Energy- and cost-efficient lattice-QCD computations using graphics processing units
Energy Technology Data Exchange (ETDEWEB)
Bach, Matthias
2014-07-01
Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus, despite the complexity of LQCD applications, it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) has come up as a new means of parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages both in the pace at which higher-performing hardware becomes available and in its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision Wilson Dslash kernel for a single GPU, achieving 120 GFLOPS. Dslash, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD
International Nuclear Information System (INIS)
Woodruff, S.B.
1994-01-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, a fixed, uniform assignment of nodes to parallel processors will result in degraded computational efficiency due to the poor load balancing. A standard method for treating data-dependent models on vector architectures has been to use gather operations (or indirect addressing) to sort the nodes into subsets that (temporarily) share a common computational model. However, this method is not effective on distributed-memory data-parallel architectures, where indirect addressing involves expensive communication overhead. Another serious problem with this method involves software engineering challenges in the areas of maintainability and extensibility. For example, an implementation that was hand-tuned to achieve good computational efficiency would have to be rewritten whenever the decision tree governing the sorting was modified. Using an example based on the calculation of the wall-to-liquid and wall-to-vapor heat-transfer coefficients for three nonboiling flow regimes, we describe how the use of the Fortran 90 WHERE construct and automatic inlining of functions can be used to ameliorate this problem while improving both efficiency and software engineering. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. We discuss why developers should either wait for such solutions or consider alternative numerical algorithms, such as a neural network
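The masked, data-parallel alternative to gather/scatter can be sketched with NumPy's `where`, which plays a role analogous to the Fortran 90 WHERE construct; the regime formulas are invented for illustration and are not TRAC's correlations:

```python
import numpy as np

def heat_transfer_coeffs(regime, velocity):
    """Flow-regime-dependent coefficients evaluated data-parallel under masks
    (no gather/scatter). The formulas are invented placeholders, one per
    regime code: 0 = single-phase liquid, 1 = bubbly, 2 = annular/mist."""
    return np.where(regime == 0, 10.0 + 2.0 * velocity,
           np.where(regime == 1, 50.0 + 5.0 * np.sqrt(velocity),
                    100.0 + 0.5 * velocity))

regime = np.array([0, 1, 2, 0, 1])
vel = np.array([1.0, 4.0, 2.0, 3.0, 9.0])
print(heat_transfer_coeffs(regime, vel))
```

As with WHERE, every branch is evaluated for every node and the mask selects the result, so load balance stays uniform across processors at the cost of some redundant arithmetic.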
Oback, Björn
2008-07-01
Despite more than a decade of research efforts, farm animal cloning by somatic cell nuclear transfer (SCNT) is still frustratingly inefficient. Inefficiency manifests itself at different levels, which are currently not well integrated. At the molecular level, it leads to widespread genetic, epigenetic and transcriptional aberrations in cloned embryos. At the organismal level, these genome-wide abnormalities compromise development of cloned foetuses and offspring. Specific molecular defects need to be causally linked to specific cloned phenotypes, in order to design specific treatments to correct them. Cloning efficiency depends on the ability of the nuclear donor cell to be fully reprogrammed into an embryonic state and the ability of the enucleated recipient cell to carry out the reprogramming reactions. It has been postulated that reprogrammability of the somatic donor cell epigenome is influenced by its differentiation status. However, direct comparisons between cells of divergent differentiation status within several somatic lineages have found no conclusive evidence for this. Choosing somatic stem cells as donors has not improved cloning efficiency, indicating that donor cell type may be less critical for cloning success. Different recipient cells, on the other hand, vary in their reprogramming ability. In bovine, using zygotes instead of oocytes has increased cloning success. Other improvements in livestock cloning efficiency include better coordinating donor cell type with cell cycle stage and aggregating cloned embryos. In the future, it will be important to demonstrate if these small increases at every step are cumulative, adding up to an integrated cloning protocol with greatly improved efficiency.
International Nuclear Information System (INIS)
Goodarzi, M.; Ramezanpour, R.
2014-01-01
Highlights: • Alternative cross sections for a natural draft cooling tower were proposed. • A numerical solution was applied to study thermal and hydraulic performance. • Thermal and hydraulic performance was assessed by comparative parameters. • The cooling tower with an elliptical cross section had better thermal performance under crosswind. • It could be successfully used in regions with an invariant wind direction. - Abstract: The cooling efficiency of a natural draft dry cooling tower may decrease significantly under crosswind conditions. Therefore, many researchers have attempted to improve the cooling efficiency under this condition by using structural or mechanical facilities. In this article, an alternative shell geometry with an elliptical cross section is proposed for this type of cooling tower instead of the usual shell geometry with a circular cross section. The thermal performance and cooling efficiency of the two types of cooling towers are numerically investigated. Numerical simulations show that the cooling tower with an elliptical cross section improves the cooling efficiency compared to the usual type with a circular cross section under high-speed wind moving normal to the longitudinal diameter of the elliptical cooling tower.
Automation of analytical processes. A tool for higher efficiency and safety
International Nuclear Information System (INIS)
Groll, P.
1976-01-01
The analytical laboratory of a radiochemical facility is usually faced with the fact that numerous analyses of a similar type must be routinely carried out. Automation of such routine analytical procedures helps in increasing the efficiency and safety of the work. A review of the requirements for automation and its advantages is given and demonstrated on three examples. (author)
Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing
Directory of Open Access Journals (Sweden)
Xing Liu
2015-01-01
Full Text Available Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of MDs but also reduce energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that can dynamically choose a channel to transmit data based on queue backlog and channel statistics. Then, we show that the proposed scheduling algorithm can make a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm can reduce the time-average energy consumption for offloading compared to the existing algorithm.
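Lyapunov drift-plus-penalty schedulers of this kind typically reduce, at each slot, to a simple per-channel cost comparison. The sketch below illustrates that generic structure only, not the paper's algorithm; the channel list, the cost form, and the V parameter are assumptions.

```python
def choose_channel(queue_backlog, channels, V):
    """Drift-plus-penalty channel choice: minimize V*power - backlog*rate.

    channels: list of (rate, power) pairs observed for the current slot.
    Larger V favors energy savings; a large backlog favors fast channels.
    """
    costs = [V * power - queue_backlog * rate for rate, power in channels]
    return min(range(len(channels)), key=lambda c: costs[c])
```

Tuning V moves the scheduler along the queue-backlog versus energy-consumption tradeoff curve that the abstract describes.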
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
McClure, Kevin R.
2017-01-01
A growing number of public colleges and universities in the United States have hired management consulting firms to help develop strategies aimed at increasing institutional effectiveness and efficiency. The purpose of this paper is to explore the frames and strategies of consultants in US public higher education reform efforts. Drawing upon a…
Lu, Yung-Hsiang; Chen, Ku-Hsieh
2013-01-01
This paper aims at appraising the cost efficiency and technology of institutions of higher technological and vocational education. Differing from conventional literature, it considers the potential influence of inherent discrepancies in output quality and characteristics of school systems for institutes of technology (ITs) and universities of…
International Nuclear Information System (INIS)
Woodruff, S.B.
1992-01-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang
2017-12-01
Visibility computation is of great interest in location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called the synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We discretized the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior to these two algorithms in accuracy.
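The running-horizon idea is easiest to see on a 1D terrain profile: a point is visible exactly when its slope from the observer is not below the maximum slope of all nearer points. A minimal sketch under that simplification (the 2D grid sweep and zone discretization of SVP are omitted):

```python
def visible_points(heights, observer_height):
    """Visibility along a 1D terrain profile from an observer at index 0.

    A running maximum of slope (the 'horizon' synthesized from all nearer
    points) decides visibility in a single pass, instead of re-tracing a
    full line of sight for every target point.
    """
    visible = [True]  # the observer's own cell
    horizon = float("-inf")
    for d, h in enumerate(heights[1:], start=1):
        slope = (h - observer_height) / d
        visible.append(slope >= horizon)
        horizon = max(horizon, slope)
    return visible
```

The single pass is what makes horizon-based methods fast: each point is touched once, whatever the profile length.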
Olaskoaga-Larrauri, Jon; Barrenetxea-Ayesta, Miren; Cardona-Rodríguez, Antonio; Mijangos-Del Campo, Juan José; Barandiaran-Galdós, Marta
2016-01-01
The literature on quality management at higher education institutions has for some time been working on the basis of two issues: a) the diversity of ideas as to what "quality" means, which makes it harder to apply the principles of quality management in this context; and b) the idea that this diversity is in some way a response to the…
Managing the higher risks of low-cost high-efficiency advanced power generation technologies
International Nuclear Information System (INIS)
Pearson, M.
1997-01-01
Independent power producers operate large coal-fired installations and gas turbine combined-cycle (GTCC) facilities. Combined-cycle units are complex, and their reliability and availability are greatly influenced by mechanical, instrumentation and control weaknesses. It was suggested that these weaknesses could be avoided by tighter specifications and more rigorous functional testing before acceptance by the owner. For the present, the difficulties of developing reliable, lower installed-cost/kW, more efficient GTCC designs, together with the pressure for lower NOx emissions with 'dry' combustors, continue to be the most difficult challenges for all GT manufacturers.
Energy Technology Data Exchange (ETDEWEB)
Diez, Rainer; Kornherr, Heinz; Pirntke, Frank; Schmidt, Juergen [Friedrich Boysen GmbH und Co. KG, Altensteig (Germany)
2010-05-15
In close interdisciplinary cooperation with the BMW Group, Boysen has developed an air-gap-insulated exhaust manifold that encompasses both banks of the 4.4 l V8 spark-ignition twin-turbo engine of the BMW X5 M and BMW X6 M. The manifold merges the exhaust gas flow from the cylinders of the left-hand and right-hand cylinder banks in opposing pairs, thus optimising gas exchange. By improving the response, torque and power characteristics of the engine, the exhaust manifold spanning both cylinder banks helps achieve high fuel efficiency. (orig.)
A new computationally-efficient computer program for simulating spectral gamma-ray logs
International Nuclear Information System (INIS)
Conaway, J.G.
1995-01-01
Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde. The model can also handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations using the standard Monte Carlo model MCNP show excellent agreement with total gamma flux estimates, with a computation time of about 0.01% of the time required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.
van der Wegen, M.; Jaffe, B.E.; Barnard, P.L.; Schoellhamer, D.H.
2013-01-01
Bathymetries measured at 30-year intervals over the past 150 years show that San Pablo Bay experienced periods of considerable deposition followed by periods of net erosion. However, the main channel in San Pablo Bay has continuously narrowed. The underlying mechanisms and consequences of this tidal channel evolution are not well understood. The central question of this study is whether tidal channels evolve towards a geometry that leads to more efficient hydraulic conveyance and sediment throughput. We applied a hydrodynamic process-based numerical model (Delft3D), which was run on 5 San Pablo Bay bathymetries measured between 1856 and 1983. Model results show increasing energy dissipation levels for lower water flows, leading to an approximately 15% lower efficiency in 1983 compared to 1856. During the same period the relative seaward sediment throughput through the San Pablo Bay main channel increased by 10%. A probable explanation is that San Pablo Bay is still affected by the excessive historic sediment supply. Sea level rise and Delta surface water area variations over 150 years have limited effect on the model results. With expected lower sediment concentrations in the watershed and less impact of wind waves due to erosion of the shallow flats, it is possible that energy dissipation levels will decrease again in future decades. Our study suggests that the morphodynamic adaptation time scale to excessive variations in sediment supply to estuaries may be on the order of centuries.
Spin-neurons: A possible path to energy-efficient neuromorphic computers
Energy Technology Data Exchange (ETDEWEB)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik [School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)
2013-12-21
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
Berrian, Djaber; Fathi, Mohamed; Kechouane, Mohamed
2018-02-01
Bifacial solar cells that maximize the energy output per square meter have become a new trend in the field of photovoltaic cells. However, the application of thin-film material in bifacial solar cells, viz., thin-film hydrogenated amorphous silicon (a-Si:H), is extremely rare. Therefore, this paper presents the optimization and the influence of the band gap, thickness and doping on the performance of a glass/glass thin-film a-Si:H (n-i-p) bifacial solar cell, using a computer-aided simulation tool, Automat for Simulation of Heterostructures (AFORS-HET). It is worth mentioning that the thickness and the band gap of the i-layer are the key parameters in achieving higher efficiency, and hence they have to be handled carefully during the fabrication process. Furthermore, an efficient thin-film a-Si:H bifacial solar cell requires thinner and heavily doped n and p emitter layers. On the other hand, the band gap of the p-layer showed a dramatic reduction of the efficiency at 2.3 eV. Moreover, a high bifaciality factor of more than 92% is attained, and a top efficiency of 10.9% is revealed under p-side illumination. These optimizations demonstrate significant enhancements over recent experimental work on thin-film a-Si:H bifacial solar cells and would also be useful for future experimental investigations of an efficient a-Si:H thin-film bifacial solar cell.
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors
Taylor, Estelle; Goede, Roelien; Steyn, Tjaart
2011-01-01
Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…
Opinions on Computing Education in Korean K-12 System: Higher Education Perspective
Kim, Dae-Kyoo; Jeong, Dongwon; Lu, Lunjin; Debnath, Debatosh; Ming, Hua
2015-01-01
The need for computing education in the K-12 curriculum has grown globally. The Republic of Korea is not an exception. In response to the need, the Korean Ministry of Education has announced an outline for software-centric computing education in the K-12 system, which aims at enhancing the current computing education with software emphasis. In…
Which Are the Determinants of Online Students' Efficiency in Higher Education?
Castillo-Merino, David; Serradell-Lopez, Enric; González-González, Inés
The international literature shows that the positive effect on students' performance of adopting innovations in teaching and learning technology does not apply equally to all teaching methods and learning styles, as it depends on university strategy and policy towards Information and Communication Technologies (ICT) adoption, students' abilities, technology uses in the educational process by teachers and students, and the selection of a methodology that matches digital uses. This paper provides empirical answers to these questions with data from online students at the Open University of Catalonia (UOC). An empirical model based on structural equations has been defined to explain the complex relationships between variables. Our results show that motivation is the main variable affecting online students' performance. It appears as a latent variable influenced by students' perception of efficiency, with an indirect, positive and significant effect on students' performance through students' ability in ICT uses.
Sanders, Dirk; Moser, Andrea; Newton, Jason; van Veen, F J Frank
2016-03-16
Trophic assimilation efficiency (conversion of resource biomass into consumer biomass) is thought to be a limiting factor for food chain length in natural communities. In host-parasitoid systems, which account for the majority of terrestrial consumer interactions, a high trophic assimilation efficiency may be expected at higher trophic levels because of the close match of resource composition of host tissue and the consumer's resource requirements, which would allow for longer food chains. We measured efficiency of biomass transfer along an aphid-primary-secondary-tertiary parasitoid food chain and used stable isotope analysis to confirm trophic levels. We show high efficiency in biomass transfer along the food chain. From the third to the fourth trophic level, the proportion of host biomass transferred was 45%, 65% and 73%, respectively, for three secondary parasitoid species. For two parasitoid species that can act at the fourth and fifth trophic levels, we show markedly increased trophic assimilation efficiencies at the higher trophic level, which increased from 45 to 63% and 73 to 93%, respectively. In common with other food chains, δ¹⁵N increased with trophic level, with trophic discrimination factors (Δ¹⁵N) 1.34 and 1.49‰ from primary parasitoids to endoparasitic and ectoparasitic secondary parasitoids, respectively, and 0.78‰ from secondary to tertiary parasitoids. Owing to the extraordinarily high efficiency of hyperparasitoids, cryptic higher trophic levels may exist in host-parasitoid communities, which could alter our understanding of the dynamics and drivers of community structure of these important systems. © 2016 The Authors.
An efficient algorithm to compute subsets of points in ℤⁿ
Pacheco Martínez, Ana María; Real Jurado, Pedro
2012-01-01
In this paper we show a more efficient algorithm than that in [8] to compute subsets of points non-congruent by isometries. This algorithm can be used to reconstruct the object from the digital image. Both algorithms are compared, highlighting the improvements obtained in terms of CPU time.
Efficient Computation of Transition State Resonances and Reaction Rates from a Quantum Normal Form
Schubert, Roman; Waalkens, Holger; Wiggins, Stephen
2006-01-01
A quantum version of a recent formulation of transition state theory in phase space is presented. The theory developed provides an algorithm to compute quantum reaction rates and the associated Gamow-Siegert resonances with very high accuracy. The algorithm is especially efficient for
Defect correction and multigrid for an efficient and accurate computation of airfoil flows
Koren, B.
1988-01-01
Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction
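The defect-correction idea referred to above can be sketched generically: iterate a cheap, stable low-order operator, driven by the residual (defect) of the accurate high-order discretization. A minimal linear-algebra illustration under that simplification; the matrices below stand in for first- and second-order discrete operators, not the Euler equations themselves.

```python
import numpy as np

def defect_correction(A_high, A_low, b, iters=30):
    """Iterative defect correction: x <- x + A_low^{-1} (b - A_high x).

    Only the easy-to-solve low-order operator A_low is ever inverted,
    yet the iteration converges to the accurate solution A_high^{-1} b
    (when the iteration matrix I - A_low^{-1} A_high is contractive).
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + np.linalg.solve(A_low, b - A_high @ x)
    return x
```

In the multigrid setting, the inner low-order solve is itself replaced by a few cheap multigrid cycles, which is where the method's efficiency comes from.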
A computationally efficient 3D finite-volume scheme for violent liquid–gas sloshing
CSIR Research Space (South Africa)
Oxtoby, Oliver F
2015-10-01
Full Text Available We describe a semi-implicit volume-of-fluid free-surface-modelling methodology for flow problems involving violent free-surface motion. For efficient computation, a hybrid-unstructured edge-based vertex-centred finite volume discretisation...
Energy Technology Data Exchange (ETDEWEB)
Penedo, M., E-mail: mapenedo@imm.cnm.csic.es; Hormeño, S.; Fernández-Martínez, I.; Luna, M.; Briones, F. [IMM-Instituto de Microelectrónica de Madrid (CNM-CSIC), Isaac Newton 8, PTM, E-28760 Tres Cantos, Madrid (Spain); Raman, A. [Birck Nanotechnology Center and School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47904 (United States)
2014-10-27
Recent developments in dynamic Atomic Force Microscopy where several eigenmodes are simultaneously excited in liquid media are proving to be an excellent tool in biological studies. Despite its relevance, the search for a reliable, efficient, and strong cantilever excitation method is still in progress. Herein, we present a theoretical modeling and experimental results of different actuation methods compatible with the operation of Atomic Force Microscopy in liquid environments: ideal acoustic, homogeneously distributed force, distributed applied torque (MAC Mode™), photothermal and magnetostrictive excitation. From the analysis of the results, it can be concluded that magnetostriction is the strongest and most efficient technique for higher eigenmode excitation when using soft cantilevers in liquid media.
Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.
Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F
2011-03-01
This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
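For the zeroth and first moments, the surface-based approach reduces to summing signed tetrahedra formed by each triangle and the origin (an application of the divergence theorem). The sketch below illustrates that principle only; it is not the paper's order-N moment algorithm.

```python
import numpy as np

def mesh_volume_and_centroid(vertices, faces):
    """Volume and centroid of a closed, outward-oriented triangle mesh.

    Each face triangle plus the origin forms a signed tetrahedron; the
    signed contributions sum to the exact zeroth moment (volume) and
    first moments of the enclosed homogeneous solid.
    """
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]            # shape (F, 3, 3)
    dets = np.linalg.det(tri)             # 6 * signed tetra volumes
    volume = dets.sum() / 6.0
    # Each tetra's centroid is (v0 + v1 + v2 + origin) / 4.
    first = (dets[:, None] * tri.sum(axis=1)).sum(axis=0) / 24.0
    return volume, first / volume
```

Higher-order moments follow the same pattern with higher-degree polynomials integrated over each tetrahedron, which is where the paper's complexity reduction applies.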
A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI
International Nuclear Information System (INIS)
Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D
2011-01-01
Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
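The standard OMP building block used above can be sketched in a few lines: greedily select the dictionary atom most correlated with the residual, then re-solve a least-squares fit on the accumulated support. The masked, y-f-space-aware variant in the paper adds logic on top of this generic core.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit for y ~ A x with x sparse (sparsity >= 1).

    Greedy atom selection by residual correlation, followed by a
    least-squares re-fit restricted to the selected support.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

The per-iteration cost is dominated by the correlation step and the small least-squares solve, which is why restricting OMP to a masked support (as the paper does for static y-f regions) speeds up reconstruction.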
Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad
2017-09-01
This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, magnetometer, sun sensor and star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, the inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by the inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy. Therefore, a calibration algorithm is utilized for estimation of the main gyro parameters.
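Processing measurement blocks one at a time, Murrell-style, can be sketched as follows; the state layout and measurement models here are generic placeholders, not the paper's satellite configuration.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Sequential Kalman measurement update (Murrell-style).

    Each 3-vector measurement is absorbed on its own, so every gain needs
    only a 3x3 inverse instead of one 3n x 3n inverse for n measurements.
    measurements: iterable of (z, H, R) with z (3,), H (3, n), R (3, 3).
    """
    for z, H, R in measurements:
        S = H @ P @ H.T + R                 # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # gain via a 3x3 inverse only
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For linear models the result matches the batched update; the savings grow with the number of sensors fused per sampling time.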
On the Computation of the Efficient Frontier of the Portfolio Selection Problem
Directory of Open Access Journals (Sweden)
Clara Calvo
2012-01-01
Full Text Available An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plotting of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to the investor in order to select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
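Without the cardinality and semicontinuous-variable constraints, each point on the mean-variance frontier solves an equality-constrained quadratic program with a small linear KKT system. The sketch below computes one such plain-Markowitz point; the paper's ε-constraint procedure handles the harder constrained frontier.

```python
import numpy as np

def frontier_point(mu, cov, target):
    """Minimum-variance weights for a target expected return.

    Solves the KKT system of  min w'Cw  s.t.  mu'w = target, 1'w = 1
    (equality constraints only; short selling allowed in this sketch).
    Returns (weights, portfolio variance).
    """
    n = len(mu)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * cov          # gradient of the quadratic objective
    A[:n, n] = mu                # multiplier column for the return constraint
    A[:n, n + 1] = 1.0           # multiplier column for the budget constraint
    A[n, :n] = mu
    A[n + 1, :n] = 1.0
    b = np.zeros(n + 2)
    b[n] = target
    b[n + 1] = 1.0
    w = np.linalg.solve(A, b)[:n]
    return w, float(w @ cov @ w)
```

Sweeping `target` over a grid traces the frontier numerically; the paper goes further and derives the arcs of parabola analytically.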
Gnasekaran, Pavallekoodi; Antony, Jessica Jeyanthi James; Uddain, Jasim; Subramaniam, Sreeramanan
2014-01-01
The presented study established Agrobacterium-mediated genetic transformation using protocorm-like bodies (PLBs) for the production of the transgenic Vanda Kasem's Delight Tom Boykin (VKD) orchid. Several parameters, such as PLB size, immersion period, level of wounding, Agrobacterium density, cocultivation period, and concentration of acetosyringone, were tested and quantified using gusA gene expression to optimize the efficiency of Agrobacterium-mediated genetic transformation of VKD's PLBs. Based on the results, 3-4 mm PLBs wounded with a scalpel and immersed for 30 minutes in an Agrobacterium suspension of 0.8 absorbance units at A600 produced the highest GUS expression. Furthermore, cocultivating infected PLBs for 4 days in the dark on Vacin and Went cocultivation medium containing 200 μM acetosyringone enhanced the GUS expression. PCR analysis of the putative transformants selected in the presence of 250 mg/L cefotaxime and 30 mg/L geneticin proved the presence of the wheatwin1, wheatwin2, and nptII genes.
Energy Technology Data Exchange (ETDEWEB)
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computationally time-saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Computational efficiency using the CYBER-205 computer for the PACER Monte Carlo Program
International Nuclear Information System (INIS)
Candelore, N.R.; Maher, C.M.; Gast, R.C.
1985-09-01
The use of the large memory of the CYBER-205 and its vector data handling logic produced speedups over scalar code ranging from a factor of 7 for unit cell calculations with relatively few compositions to a factor of 5 for problems having more detailed geometry and materials. By vectorizing the neutron tracking in PACER (the collision analysis remained in scalar code), an asymptotic value of 200 neutrons/cpu-second was achieved for a batch size of 10,000 neutrons. The complete vectorization of the Monte Carlo method as performed by Brown resulted in even higher speedups in neutron processing rates over the use of scalar code. Large speedups in neutron processing rates are beneficial not only to achieve more accurate results for the neutronics calculations which are routinely done using Monte Carlo, but also to extend the use of the Monte Carlo method to applications that were previously considered impractical because of large running times
Increasing the efficiency of sulphur dioxide in wine by using saturated higher fatty acids
Directory of Open Access Journals (Sweden)
Petra Bábíková
2012-01-01
This work addresses stopping alcoholic fermentation to leave residual sugar, and the possibility of reducing sulfur dioxide in wine technology and storage. A mixture of higher saturated fatty acids combined with a reduced dose of sulfur dioxide proved a very good option. Experiments confirmed that the concentration of viable yeasts per millilitre of wine in variants treated with the fatty-acid mixture is significantly lower than in variants treated with sulfur dioxide alone. The influence of fatty acids on stored wine with residual sugar was then monitored, and a dramatic prolongation of the interval before secondary fermentation (spoilage of the wine in the bottle) was confirmed. Finally, attention was paid to the influence of this treatment on the organoleptic characteristics of the wine; the recommended fatty-acid concentration can be considered below the threshold of perceptibility.
Increasing emitter efficiency in 3.3-kV enhanced trench IGBTs for higher short-circuit capability
DEFF Research Database (Denmark)
Reigosa, Paula Diaz; Iannuzzo, Francesco; Rahimo, Munaf
2018-01-01
In this paper, a 3.3-kV Enhanced Trench IGBT has been designed with high emitter efficiency to improve its short-circuit robustness. The carrier distribution profile has been shaped so that the electric field at the surface of the IGBT can be increased, thereby counteracting the onset of the Kirk effect. This design approach is beneficial for mitigating the high-frequency oscillations typically observed in IGBTs under short-circuit conditions. The effectiveness of the proposed design rule is validated by means of mixed-mode device simulations. Two IGBTs with different emitter efficiencies were then fabricated and tested under short circuit, validating that the high-frequency oscillations can be mitigated with higher-emitter-efficiency IGBT designs.
Pye, John; Hughes, Graham; Abbasi, Ehsan; Asselineau, Charles-Alexis; Burgess, Greg; Coventry, Joe; Logie, Will; Venn, Felix; Zapata, José
2016-05-01
An integrated model for an axisymmetric helical-coil tubular cavity receiver is presented, incorporating optical ray-tracing for incident solar flux, radiosity analysis for thermal emissions, computational fluid dynamics for external convection, and a one-dimensional hydrodynamic model for internal flow-boiling of water. A receiver efficiency of 98.7%, defined as the ratio of fluid heating to incident irradiance on the receiver, is calculated for an inlet/outlet temperature range of 60-500 °C. The high-efficiency design makes effective use of non-uniform flux in its non-isothermal layout, matching lower-temperature regions to areas of lower flux. Full-scale testing of the design will occur in late 2015.
Towards a higher energy efficiency and lower carbon society the European approach and experience
International Nuclear Information System (INIS)
Hein, Klaus R.G.
2010-01-01
The use of natural energy sources and their conversion to secondary forms of energy are a crucial basis for the development of our society, whose requirements change continuously as the population grows and the needs of modern life broaden. As a consequence, the consumption of primary energy resources rose drastically worldwide during the last five decades, in particular in industrialized regions such as Europe. In parallel, the increasing awareness of the negative effects of fuel-related pollution on the environment, and the introduction of stringent emission-control regulations about three decades ago, initiated extensive development and retrofit activities, resulting in the high state of the art applied today. As an additional challenge, the worldwide debate about the potential effects of the emission of the so-called greenhouse gases on the global climate, in particular carbon dioxide from the use of predominantly fossil fuels, has initiated extensive efforts in the European Union to improve the efficiency of large-scale fuel conversion and, more recently, to develop processes for CO2 capture and its subsequent storage. Furthermore, the use of so-called CO2-neutral fuels, such as biomass and organic wastes, as well as the non-carbonaceous options of water, wind, and solar power, is promoted towards a growing role in a future primary-energy mix. In addition, the majority of the member states of the European Union support the minimization of energy losses by providing incentives for energy savings and for switching to lower-energy-consuming alternatives. In fact, all the above-mentioned instruments have to be applied in order to fulfil the long-term requirements, e.g. a reduction of greenhouse-gas emissions by at least 20% by 2020 and further reductions for the time beyond. The presentation will start by summarizing the development of the demand and supply of energy in Europe during recent decades. After an outline of the
Computer-aided modeling framework for efficient model development, analysis and identification
DEFF Research Database (Denmark)
Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio
2011-01-01
Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided methods introduce. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task. The methodology has been implemented into a computer-aided modeling framework, which combines the expert skills, tools, and database connections required for the different steps of the model-development workflow, with the goal of increasing the efficiency of the modeling process. The framework has two main
Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.
2015-01-01
The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to paper-based exams, and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration. PMID:26641632
Kangsabanik, Jiban; Sugathan, Vipinraj; Yadav, Anuradha; Yella, Aswani; Alam, Aftab
2018-05-01
Solar energy plays an important role in substituting the ever-declining source of fossil-fuel energy. Finding novel materials for solar-cell applications is an integral part of photovoltaic research. Hybrid lead halide perovskites are among the most sought-after materials in the photovoltaic research community. Their unique intrinsic properties, flexible synthesis techniques, and device-fabrication architectures have made the community highly buoyant over the past few years. Yet two fundamental issues remain a concern: stability in the external environment, and toxicity due to Pb. This has led to a search for alternative materials. More recently, double perovskite [A2BB'X6 (X = Cl, Br, I)] materials have emerged as a promising choice. A few experimental syntheses and high-throughput computational studies have been carried out to screen for promising candidates of this class. The main outcome from these studies, however, can essentially be summarized into two categories: (i) either the materials have an indirect band gap, or (ii) a direct but large optical band gap, which is not suitable for solar devices. Here we propose a large set of stable double perovskite materials, Cs2BB'X6 (X = Cl, Br, I), which show an indirect-to-direct band-gap transition via small Pb2+ doping at both the B and B' sites. This is done by careful band (orbital) engineering using first-principles calculations. This kind of doping changes the topology of the band structure, triggering an indirect-to-direct transition that is optically allowed. It also reduces the band gap significantly, bringing it well into the visible region. We also simulated the optical absorption spectra of these systems and found a comparable or higher absorption coefficient and solar efficiency with respect to the state-of-the-art photovoltaic absorber material CH3NH3PbI3. A number of materials Cs2(B0.75Pb0.25)(B'0.75Pb0.25)X6 (for various combinations of B, B', and X) are found to be promising
Efficient scatter model for simulation of ultrasound images from computed tomography data
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Given the high value of specialized low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, with a performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
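The scattering model named above (multiplicative noise followed by convolution with a PSF) can be sketched as follows. The echogenicity map, the Rayleigh scatterer statistics, and the Gaussian PSF widths are assumptions for illustration, not the simulator's CT-derived values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 "echogenicity" map standing in for CT-derived acoustic
# properties (the real simulator derives this from CT data).
tissue = np.ones((64, 64))
tissue[20:40, 20:40] = 2.0            # a brighter inclusion

# 1) multiplicative noise: random scatterer strength modulates the tissue map
scatter = tissue * rng.rayleigh(scale=1.0, size=tissue.shape)

# 2) convolution with a separable Gaussian approximating the transducer PSF
#    (axial/lateral widths are invented; lateral is wider, as in practice)
def gaussian_kernel(sigma, radius=6):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

ax = gaussian_kernel(1.0)             # axial PSF
lat = gaussian_kernel(2.5)            # lateral PSF
img = np.apply_along_axis(lambda r: np.convolve(r, lat, mode="same"), 1, scatter)
img = np.apply_along_axis(lambda c: np.convolve(c, ax, mode="same"), 0, img)
```

The convolution correlates neighbouring speckle samples, which is what makes the noise pattern move coherently with the simulated transducer.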
Rodríguez Fonollosa, Javier; Nikias, Chrysostomos L.
1993-01-01
The Wigner higher order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher order moment spectra domains. A general class of time-frequency higher order moment spectra is also defined in terms of arbitrary higher order moments of the signal as generalizations of the Cohen’s general class of time-frequency representations. The properties of the general class of time-frequency higher order moment spectra can be related to the properties...
Influence of studying in higher educational establishment on students’ harmful computer habits
Directory of Open Access Journals (Sweden)
M.D. Kudryavtsev
2016-10-01
Purpose: to determine the influence of the educational process on the prevalence of students' harmful computer habits. Material: 1st-3rd year students (803 boys and 596 girls) participated in the research; all took the discipline Physical Culture and had no health disorders. Results: it was found that students have, on average, two computer habits each. The habits most likely to become addictions, and the most dangerous in that respect, are internet use and computer games; a student with these habits spends more than 4 hours a day on them. 33% of 1st-year boys and 16% of 1st-year girls spend more than 2 hours a day on computer games. 15-20% of boys and 25-30% of girls spend more than 4 hours a day on the internet, and 10-15% of boys spend more than 4 hours a day on computer games; it is very probable that these students already have a computer-game addiction. Conclusions: recently a dangerous tendency towards watching anime has appeared. Physical culture faculties and departments should take additional measures to reduce students' computer addictions, and teachers of all disciplines should organize the educational process with electronic resources in a way that does not encourage the progression of students' computer habits.
Above-Campus Services: Shaping the Promise of Cloud Computing for Higher Education
Wheeler, Brad; Waggener, Shelton
2009-01-01
The concept of today's cloud computing may date back to 1961, when John McCarthy, retired Stanford professor and Turing Award winner, delivered a speech at MIT's Centennial. In that speech, he predicted that in the future, computing would become a "public utility." Yet for colleges and universities, the recent growth of pervasive, very high speed…
Unified commutation-pruning technique for efficient computation of composite DFTs
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data-acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. By treating the combinational hypothesis-testing optimization problem of preferable allocations among all feasible commuting-pruning modalities, we find the globally optimal solution to the pruning problem, which always requires fewer, or at most the same number of, arithmetic operations than any other feasible modality. The DFTCOMM method thus outperforms, in attainable savings in required arithmetic operations, all competing pruning techniques reported in the literature. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
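The trade-off that commutation-pruning exploits is easy to see in a minimal form: when only a few output bins are needed, evaluating them directly costs O(KN) and can beat a full O(N log N) FFT; which side wins depends on K, which is exactly the kind of decision a commutation scheme automates. This sketch (not the DFTCOMM algorithm itself) computes a pruned set of bins for a composite length:

```python
import numpy as np

def pruned_dft(x, bins):
    """Directly evaluate only the requested DFT bins (O(K*N) work)."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    k = np.asarray(bins)[:, None]
    # one complex exponential row per requested bin, summed over the signal
    return (x * np.exp(-2j * np.pi * k * n / x.size)).sum(axis=1)

# composite length 240 = 2^4 * 3 * 5; the bin choice is arbitrary
x = np.random.default_rng(7).normal(size=240)
wanted = [0, 3, 17, 120]
pruned = pruned_dft(x, wanted)
full = np.fft.fft(x)[wanted]          # reference: full FFT, then select
```

`pruned` and `full` agree to floating-point precision; the pruned path never computes the other 236 bins.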
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
A strategy for improved computational efficiency of the method of anchored distributions
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function, called bundling, that relaxes the requirement for large numbers of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model-reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and the computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and a script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on the predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
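A minimal sketch of the bundling idea, with an invented one-parameter "forward model" standing in for the expensive 3-D simulator (the binning rule and noise level are assumptions for illustration, not MAD's actual conditioning):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta):
    """Stand-in 'expensive' forward model (the real FM is a 3-D
    flow-and-transport simulator; this quadratic is an assumption)."""
    return theta ** 2 + 0.1 * np.sin(3 * theta)

obs = forward_model(1.0) + 0.05          # one noisy "field measurement"
sigma = 0.1                              # assumed measurement-noise scale

thetas = rng.uniform(0.0, 2.0, 2000)     # candidate parametrizations

# Bundle similar parametrizations: one FM run per bundle, not per sample.
edges = np.linspace(0.0, 2.0, 21)
idx = np.clip(np.digitize(thetas, edges) - 1, 0, 19)
reps = 0.5 * (edges[:-1] + edges[1:])    # bundle representatives
fm_per_bundle = forward_model(reps)      # 20 FM runs instead of 2000

resid = obs - fm_per_bundle[idx]         # every sample reuses its bundle's run
likelihood = np.exp(-0.5 * (resid / sigma) ** 2)
```

Samples inherit the forward-model output of their bundle's representative, which is the source of both the cost saving and the approximation error the paper analyzes.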
Efficient O(N) recursive computation of the operational space inertial matrix
International Nuclear Information System (INIS)
Lilly, K.W.; Orin, D.E.
1993-01-01
The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N-degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.
Robust fault detection of linear systems using a computationally efficient set-membership method
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Bak, Thomas
2014-01-01
In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measurements ... and is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods.
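A minimal sketch of the set-membership idea for a scalar linear system: propagate an interval outer-approximation of the state set and flag a fault when a measurement cannot be explained by any state in it. Plain interval arithmetic here stands in for the paper's method, and all numbers are invented:

```python
# Scalar linear system x+ = a*x + b*u + w, y = c*x + v, with bounded
# noises |w| <= wbar, |v| <= vbar (all values are illustrative assumptions).
a, b, c = 0.9, 0.1, 1.0
wbar, vbar = 0.02, 0.05

def propagate(xlo, xhi, u):
    """Interval outer-approximation of the reachable next-state set."""
    lo, hi = sorted((a * xlo + b * u, a * xhi + b * u))
    return lo - wbar, hi + wbar

def consistent(xlo, xhi, y):
    """Fault-free iff y is explainable by some state in the interval
    plus bounded measurement noise."""
    return c * xlo - vbar <= y <= c * xhi + vbar

xlo, xhi = 0.0, 0.0            # known initial state
faulty = False
for k in range(20):
    xlo, xhi = propagate(xlo, xhi, u=1.0)
    # simulated measurement: interval midpoint, with a fault injected at k=10
    y_meas = (xlo + xhi) / 2 + (0.5 if k == 10 else 0.0)
    if not consistent(xlo, xhi, y_meas):
        faulty = True          # a set-membership detector raises the flag here
```

Zonotopes (the comparison baseline in the abstract) play the same role as these intervals but give tighter multi-dimensional outer-approximations.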
A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm
Directory of Open Access Journals (Sweden)
Mariana-Eugenia Ilas
2018-03-01
In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with a slope computation, and the classical interpolation-based magnitude allocation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism) while maintaining classification accuracy very close to that of the original algorithm. The new method is therefore attractive for many applications, including car detection and classification.
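The slope-based binning idea can be sketched as follows. The 9-bin unsigned layout is the common HOG choice; the branch structure is our illustration of the principle, not necessarily the paper's FPGA datapath:

```python
import numpy as np

# Unsigned gradient orientations in [0, 180) split into 9 HOG bins of 20
# degrees; the tangents of the bin boundaries are precomputed once.
NBINS = 9
TANS = np.tan(np.deg2rad(20.0 * np.arange(1, NBINS)))   # tan(20..160 deg)

def bin_by_slope(gx, gy):
    """Assign the orientation bin by comparing gy against gx*tan(boundary),
    so neither an arctangent nor a division is evaluated at run time."""
    if gx < 0 or (gx == 0 and gy < 0):
        gx, gy = -gx, -gy               # fold into unsigned orientation
    if gx == 0:
        return NBINS // 2               # vertical gradient: exactly 90 deg
    if gy >= 0:                         # angle in [0, 90)
        for j in range(1, 5):           # boundaries at 20..80 deg
            if gy < gx * TANS[j - 1]:
                return j - 1
        return 4                        # [80, 90) lies in bin 4
    for j in range(5, 9):               # angle in (90, 180); bounds 100..160,
        if gy < gx * TANS[j - 1]:       # using tan's 180-degree periodicity
            return j - 1
    return 8
```

In hardware the products gx*tan(boundary) become fixed-point constant multiplies, which is where the area saving over an arctangent unit comes from.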
Directory of Open Access Journals (Sweden)
Sofia D Karamintziou
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
Wendrich, Robert E.; Guerrero, J.E.
2013-01-01
This paper examines the conceptual synthesis processes in conjunction with assistive computational support for individual and collaborative interaction. We present findings from two educational design interaction experiments in product creation processing (PCP). We focus on metacognitive aspects of
Evaluation of the efficiency of computer-aided spectra search systems based on information theory
International Nuclear Information System (INIS)
Schaarschmidt, K.
1979-01-01
Application of information theory allows objective evaluation of the efficiency of computer-aided spectra search systems. For this purpose, a significant number of search processes must be analyzed. The amount of information gained by computer application is considered as the difference between the entropy of the data bank and a conditional entropy depending on the proportion of unsuccessful search processes and ballast. The influence of the following factors can be estimated: the volume, structure, and quality of the stored spectra collection; the efficiency of the encoding instruction and the comparison algorithm; and subjective errors involved in the encoding of spectra. The relations derived are applied to two published storage and retrieval systems for infrared spectra. (Auth.)
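In information-theoretic terms, the quantity described above can be written as an entropy difference (the notation is ours, sketching the quantities the abstract names, not the paper's own symbols):

```latex
% I: information gained per search; S: the spectra collection ("data bank");
% R: the retrieval outcome. H(S | R) grows with the fraction of failed
% searches and with ballast (irrelevant hits).
\begin{aligned}
  I    &= H(S) - H(S \mid R), \\
  H(S) &= -\sum_{i=1}^{N} p_i \log_2 p_i ,
\end{aligned}
```

so a search system is efficient exactly insofar as its retrieval outcome reduces the residual uncertainty about which stored spectrum matches.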
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
I/O-Efficient Computation of Water Flow Across a Terrain
DEFF Research Database (Denmark)
Arge, Lars Allan; Revsbæk, Morten; Zeh, Norbert
2010-01-01
... We present an I/O-efficient algorithm that solves this problem using O(sort(X) log(X/M) + sort(N)) I/Os, where N is the number of terrain vertices, X is the number of pits of the terrain, sort(N) is the cost of sorting N data items, and M is the size of the computer's main memory. Our algorithm
An accurate and computationally efficient small-scale nonlinear FEA of flexible risers
Rahmati, MT; Bahai, H; Alfano, G
2016-01-01
This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers, which can be effectively implemented in a fully nested (FE²) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of the flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...
A comparison of efficient methods for the computation of Born gluon amplitudes
International Nuclear Information System (INIS)
Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan
2006-01-01
We compare four different methods for the numerical computation of the pure gluonic amplitudes in the Born approximation. We are particularly interested in the efficiency of the various methods as the number n of external particles increases. In addition, we investigate the numerical accuracy in critical phase-space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices, and (iv) BCF recursion relations.
Energy Technology Data Exchange (ETDEWEB)
Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
International Nuclear Information System (INIS)
Kamalzare, Mahmoud; Johnson, Erik A; Wojtkiewicz, Steven F
2014-01-01
Designing control strategies for smart structures, such as those with semiactive devices, is complicated by the nonlinear nature of the feedback control, secondary clipping control and other additional requirements such as device saturation. The usual design approach resorts to large-scale simulation parameter studies that are computationally expensive. The authors have previously developed an approach for state-feedback semiactive clipped-optimal control design, based on a nonlinear Volterra integral equation that provides for the computationally efficient simulation of such systems. This paper expands the applicability of the approach by demonstrating that it can also be adapted to accommodate more realistic cases when, instead of full state feedback, only a limited set of noisy response measurements is available to the controller. This extension requires incorporating a Kalman filter (KF) estimator, which is linear, into the nominal model of the uncontrolled system. The efficacy of the approach is demonstrated by a numerical study of a 100-degree-of-freedom frame model, excited by a filtered Gaussian random excitation, with noisy acceleration sensor measurements to determine the semiactive control commands. The results show that the proposed method can improve computational efficiency by more than two orders of magnitude relative to a conventional solver, while retaining a comparable level of accuracy. Further, the proposed approach is shown to be similarly efficient for an extensive Monte Carlo simulation to evaluate the effects of sensor noise levels and KF tuning on the accuracy of the response. (paper)
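The Kalman filter (KF) estimator incorporated into the nominal model above follows the standard linear predict/update cycle. A minimal sketch with NumPy; the matrices in the usage test are toy values, not the paper's 100-degree-of-freedom frame model:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P: prior state estimate and covariance
    y: noisy measurement, modeled as y = C x + v with cov(v) = R
    A, Q: state transition matrix and process-noise covariance
    """
    # Predict step
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update step with measurement y
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Because the KF is linear, it can be appended to the linear nominal system, which is what allows the Volterra-integral-equation machinery to remain applicable.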
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-12-24
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.
An efficient method for computing the absorption of solar radiation by water vapor
Chou, M.-D.; Arking, A.
1981-01-01
Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the considered investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.
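One-parameter scaling of the kind described converts an inhomogeneous absorber path into an equivalent homogeneous one by weighting each layer's absorber amount with a pressure ratio raised to a fixed exponent. A minimal sketch; the reference pressure and exponent below are illustrative assumptions, not the values used by Chou and Arking:

```python
def scaled_water_vapor_path(layers, p_ref=300.0, m=0.8):
    """Scale an inhomogeneous absorber path to an equivalent homogeneous one:

        w_eff = sum_k  w_k * (p_k / p_ref)**m

    layers: iterable of (water_vapor_amount, pressure_hPa) pairs.
    p_ref and the exponent m are illustrative, not the paper's values.
    """
    return sum(w * (p / p_ref) ** m for w, p in layers)
```

The scaled amount w_eff can then be used with transmittance tables precomputed at the single reference condition, which is where the speedup comes from.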
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
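The contrast between deterministic (quadrature) and Monte Carlo integration rules for computing output statistics under a Gaussian random input can be sketched in a few lines. The toy model and sample sizes below are assumptions for illustration, not the paper's test cases:

```python
import numpy as np

def mc_mean(f, n, seed=0):
    """Monte Carlo estimate of E[f(X)] for X ~ N(0, 1); error decays as n**-0.5."""
    rng = np.random.default_rng(seed)
    return f(rng.standard_normal(n)).mean()

def quad_mean(f, order=8):
    """Deterministic estimate of E[f(X)] via Gauss-Hermite quadrature.

    hermgauss gives nodes/weights for weight exp(-x^2); the change of
    variables x -> sqrt(2) x converts this to an expectation under N(0, 1).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    return (weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

f = lambda x: x ** 2 + 1.0   # toy "model" of one random input; E[f] = 2
```

For smooth, low-dimensional inputs the quadrature rule is exact or near-exact with a handful of evaluations, whereas Monte Carlo needs many samples; this trade-off is what motivates the multi-level Monte Carlo variants compared in the abstract.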
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Fagerberg, Rolf; Mailund, Thomas
2013-01-01
The triplet and quartet distances are distance measures to compare two rooted and two unrooted trees, respectively. The leaves of the two trees should have the same set of n labels. The distances are defined by enumerating all subsets of three labels (triplets) and four labels (quartets), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm…
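A brute-force baseline makes the triplet distance concrete: enumerate all subsets of three leaves and compare the induced rooted topologies, which touches O(n³) triplets versus the O(n log n) algorithm of the abstract. The parent/depth encoding of the trees below is an assumption for illustration:

```python
from itertools import combinations

def lca_depth(parent, depth, a, b):
    """Depth of the lowest common ancestor in a rooted tree given parent links."""
    seen = set()
    while a is not None:        # walk a up to the root, recording ancestors
        seen.add(a)
        a = parent.get(a)
    while b not in seen:        # walk b up until it meets a's ancestor chain
        b = parent[b]
    return depth[b]

def triplet_topology(parent, depth, a, b, c):
    """Which pair of {a,b,c} joins deepest; None for an unresolved (star) triplet."""
    d = {p: lca_depth(parent, depth, *p) for p in combinations((a, b, c), 2)}
    best = max(d.values())
    deepest = [p for p, v in d.items() if v == best]
    return deepest[0] if len(deepest) == 1 else None

def triplet_distance(t1, t2, leaves):
    """t1, t2 are (parent, depth) maps; count triplets with differing topologies."""
    return sum(triplet_topology(*t1, *tr) != triplet_topology(*t2, *tr)
               for tr in combinations(leaves, 3))
```

For the two three-leaf trees ((a,b),c) and ((a,c),b) the single triplet is resolved differently, so the distance is 1.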
Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors
Directory of Open Access Journals (Sweden)
Jongsoo Park
2013-01-01
Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
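Backprojection itself is a short kernel: each pixel accumulates, over all pulses, the echo sample whose range bin matches the sensor-to-pixel distance. A deliberately unoptimized sketch; the approximate-strength-reduction and locality optimizations of the paper are omitted, and the data layout is an assumption:

```python
import numpy as np

def backproject(pulses, pulse_pos, grid_xy, range_bin):
    """Minimal backprojection: every pixel accumulates each pulse's echo
    sampled at the bin matching the sensor-to-pixel distance.

    pulses:    list of 1-D range-compressed echo arrays
    pulse_pos: (num_pulses, 2) sensor positions
    grid_xy:   (num_pixels, 2) pixel positions
    range_bin: range extent of one sample bin
    """
    image = np.zeros(len(grid_xy))
    for pulse, pos in zip(pulses, pulse_pos):
        dist = np.linalg.norm(grid_xy - pos, axis=1)
        bins = np.clip((dist / range_bin).astype(int), 0, len(pulse) - 1)
        image += pulse[bins]
    return image
```

The per-pixel distance and bin lookup are what the paper's approximate strength reduction replaces with cheaper incremental arithmetic.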
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
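For orientation, the expected SFS has a simple closed form in the classical single-population, constant-size neutral case, E[ξ_i] = θ/i; the multi-population, variable-size setting handled by momi is far harder. A sketch of the classical result only (this is textbook population genetics, not momi's algorithm):

```python
def expected_sfs(n: int, theta: float = 1.0):
    """Expected site frequency spectrum for a constant-size neutral population:
    E[xi_i] = theta / i for derived-allele counts i = 1 .. n-1."""
    return [theta / i for i in range(1, n)]
```

The characteristic 1/i decay (singletons most common, high-frequency variants rare) is the baseline against which demographic distortions of the joint SFS are measured.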
Energy Technology Data Exchange (ETDEWEB)
Park, Won Young; Phadke, Amol; Shah, Nihar [Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2013-08-15
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that PC monitor efficiency will likely improve by over 40% by 2015, with a savings potential of 4.5 TWh per year in 2015 compared to today's technology. We discuss various energy-efficiency improvement options and evaluate the cost-effectiveness of three of them, at least one of which improves efficiency by at least 20% cost-effectively beyond the ongoing market trends. We assess the potential for further improving efficiency, taking into account the recent development of universal serial bus-powered liquid crystal display monitors, and find that the current technology available and deployed in them has the potential to deeply and cost-effectively reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture the global energy saving potential from PC monitors, which we estimate to be 9.2 TWh per year in 2015.
USAGE AND MAGNETIZATION OF CLOUD COMPUTING IN HIGHER STUDIES – RAJASTHAN
Directory of Open Access Journals (Sweden)
Ranjan Upadhyaya
2013-07-01
Full Text Available Young India stands at the doorstep of another revolution, that of cloud computing technology, and the whole world has seen the true colors of the Indian information revolution during the global recession. India, a vast and densely populated country (1.6 million according to the 2011 census surveys), comprises roughly 50% to 60% new-age aspirants, and of these only 30% are cloud-computing savvy. The uphill task ahead for the motherland is to train this new breed so that they can earn their livelihoods and connect to the outer world. The dream of the late Rajiv Gandhi and Prof. Yashpal is turning into reality, but more work remains. The cloud computing revolution is taking its toll and bringing many changes that were never expected or thought of in India. Cloud computing is the ladder of success for the untrained in our nation. The nation is marching ahead with ubiquitous cloud computing in this era of liberalization, privatization and globalization.
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
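The direct evaluation of parallelization efficiency mentioned above is conventionally the speedup over the serial run divided by the processor count. A minimal sketch using the standard strong-scaling definitions (these formulas are the textbook metrics, not code from the paper):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Strong-scaling speedup S = T_1 / T_p."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Direct efficiency estimate E = S / p; E = 1.0 means ideal scaling."""
    return speedup(t_serial, t_parallel) / n_procs
```

An indirect estimate, by contrast, compares runs at two different processor counts when a serial baseline is infeasible for a large grid.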
Research of z-axis geometric dose efficiency in multi-detector computed tomography
International Nuclear Information System (INIS)
Kim, You Hyun; Kim, Moon Chan
2006-01-01
With the recent prevalence of helical and multi-slice CT, radiation doses are higher than with conventional CT, due to the overbeaming effect in x-ray exposure and the interpolation technique used in image reconstruction. Although multi-detector and helical CT scanners provide a variety of opportunities for patient dose reduction, the potential risk of high radiation levels in CT examinations cannot be overemphasized, despite the additional diagnostic information acquired. More attention to the dose characteristics of CT scanners is therefore necessary, especially dose-efficient design as well as dose-modulation software, because the dose efficiency built into the scanner's design is probably the most important aspect of successful low-dose clinical performance. This study was conducted to evaluate z-axis geometric dose efficiency in single-detector CT (SDCT) and in multi-detector CT (MDCT) at each level, and to compare z-axis dose efficiency under changes of technical scan parameters such as tube focal spot size, beam collimation, detector combination, scan mode, pitch, slice width and interval. The results obtained were as follows. 1. Among the SDCT and the 4-, 8-, 16- and 64-slice MDCT scanners made by GE, SDCT had the highest and 4-slice MDCT the lowest z-axis geometric dose efficiency. 2. The small focal spot was 0.67-13.62% higher than the large focal spot in z-axis geometric dose efficiency on MDCT. 3. Large beam collimation was 3.13-51.52% higher than small beam collimation in z-axis geometric dose efficiency on MDCT. 4. Z-axis geometric dose efficiency was unchanged on 4-slice MDCT under all conditions and on 8-slice MDCT with large beam collimation as the detector combination varied, but changed irregularly on 8-slice MDCT with small beam collimation and on 16-slice MDCT under all conditions. 5. There was no significant difference in z-axis geometric dose efficiency between conventional and helical scans, nor with changes of pitch factor, slice width or interval…
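Z-axis geometric dose efficiency is commonly defined as the nominal (imaged) beam width divided by the actual, penumbra-broadened beam width; overbeaming widens the denominator, which is why narrow collimations fare worse. A sketch under that assumed definition (the study may use a vendor-specific variant):

```python
def z_axis_geometric_efficiency(nominal_width_mm: float,
                                actual_width_mm: float) -> float:
    """Fraction of the z-axis dose profile used for imaging:
    nominal collimated width / actual (penumbra-broadened) beam width."""
    if actual_width_mm < nominal_width_mm:
        raise ValueError("actual beam width includes penumbra; it cannot be "
                         "smaller than the nominal width")
    return nominal_width_mm / actual_width_mm
```

With a fixed penumbra of a few millimetres per edge, a 20 mm collimation wastes proportionally less dose than a 5 mm one, matching finding 3 above.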
SEDRX: A computer program for the simulation of Si(Li) and Ge(HP) x-ray detector efficiency
International Nuclear Information System (INIS)
Benamar, M.A.; Benouali, A.; Tchantchane, A.; Azbouche, A.; Tobbeche, S. (Centre de Developpement des Techniques Nucleaires, Algiers; Labo. des Techniques Nucleaires)
1992-12-01
The difficulties encountered in measuring x-ray detector efficiency motivated the development of a computer program to simulate this parameter. The program computes the efficiency of detectors as a function of energy. The computation is based on fitted absorption coefficients for the photoelectric, coherent and incoherent interactions. These coefficients are taken from the McMaster library or may be determined by interpolation based on cubic splines.
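At a given energy, such an efficiency computation reduces to the product of transmission through the entrance window and the absorption probability in the active volume. The function below is a simplified illustration of that structure, not SEDRX itself:

```python
import math

def intrinsic_efficiency(mu_window: float, t_window: float,
                         mu_det: float, t_det: float) -> float:
    """Photon detection efficiency at one energy:

        eff(E) = exp(-mu_w * t_w) * (1 - exp(-mu_d * t_d))

    mu_window, mu_det: linear attenuation coefficients (1/cm) of the
    window and active detector material at energy E
    t_window, t_det:   window and active-volume thicknesses (cm)
    """
    return math.exp(-mu_window * t_window) * (1.0 - math.exp(-mu_det * t_det))
```

In a full simulation the attenuation coefficients are evaluated per energy from the fitted (e.g. cubic-spline) absorption data, so efficiency is obtained as a curve over the energy range of interest.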
Popov, Vitaliy; Biemans, Harm J. A.; Kuznetsov, Andrei N.; Mulder, Martin
2014-01-01
In this exploratory study, the authors introduced an interculturally enriched collaboration script (IECS) for working in culturally diverse groups within a computer-supported collaborative learning (CSCL) environment and then assessed student online collaborative behaviour, learning performance and experiences. The question was if and how these…
Collaborative and Competitive Video Games for Teaching Computing in Higher Education
Smith, Spencer; Chan, Samantha
2017-01-01
This study measures the success of using a collaborative and competitive video game, named Space Race, to teach computing to first year engineering students. Space Race is played by teams of four, each with their own tablet, collaborating to compete against the other teams in the class. The impact of the game on student learning was studied…
Kay, Robin H.; Lauricella, Sharon
2011-01-01
Because of decreased prices, increased convenience, and wireless access, an increasing number of college and university students are using laptop computers in their classrooms. This recent trend has forced instructors to address the educational consequences of using these mobile devices. The purpose of the current study was to analyze and assess…
Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination
Abbas, Fayçal; Babahenini, Mohamed Chaouki
2018-06-01
Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation: visibility must be computed between the center of a given leaf and every leaf visible from it, and given the enormous number of leaves present in a tree, this computation is performed for each leaf, which degrades performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step generates a point cloud representing the foliage; we assume the point cloud is composed of two classes (visible, not visible) that are not linearly separable. The second step classifies the point cloud by applying the Gaussian radial basis function, which measures similarity in terms of distance between each leaf and a landmark leaf. This approximates the visibility queries, extracting the leaves that will be used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently treats light exchanges in a forest scene, allows fast computation and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
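The Gaussian radial basis function used in the classification step measures similarity as exp(−‖x−l‖²/(2σ²)) between a leaf position and a landmark leaf. A minimal sketch (the bandwidth σ and the vector encoding of leaf positions are assumptions for illustration):

```python
import numpy as np

def gaussian_rbf(x, landmark, sigma=1.0):
    """Similarity of a leaf position x to a landmark leaf:
    exp(-||x - l||^2 / (2 sigma^2)), in (0, 1], 1.0 at the landmark itself."""
    d2 = np.sum((np.asarray(x, dtype=float) - np.asarray(landmark, dtype=float)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))
```

Leaves whose similarity to the landmark exceeds a threshold would then be kept as candidate contributors of indirect illumination, in place of exact per-leaf visibility tests.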
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)
2017-07-15
Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing an ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach on discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.
Directory of Open Access Journals (Sweden)
Jianfei Zhang
2013-01-01
Full Text Available The graphics processing unit (GPU) has achieved great success in scientific computations for its tremendous computational horsepower and very high memory bandwidth. This paper discusses an efficient way to implement a polynomial preconditioned conjugate gradient solver for the finite element computation of elasticity on NVIDIA GPUs using the compute unified device architecture (CUDA). The sliced block ELLPACK (SBELL) format is introduced to store the sparse matrix arising from finite element discretization of elasticity with fewer padding zeros than traditional ELLPACK-based formats. Polynomial preconditioning methods have been investigated in terms of both convergence and running time. From the overall performance, the least-squares (L-S) polynomial method is chosen as the preconditioner in the PCG solver for the finite element equations of elasticity, giving the best results on different example meshes. In the PCG solver, a mixed precision algorithm is used not only to reduce the overall computation, storage and bandwidth requirements but also to make full use of the capacity of the GPU devices. With the SBELL format and the mixed precision algorithm, the GPU-based L-S preconditioned CG achieves a speedup of about 7-9× over the CPU implementation.
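Plain ELLPACK padding illustrates the storage idea that SBELL refines: every row is padded to a uniform length so the matrix becomes two dense (rows × max-nonzeros) arrays, ideal for SIMD-style hardware; SBELL slices the matrix so padding is per slice rather than global, cutting the stored zeros. The encoding below is a simplified sketch of plain ELLPACK, not the paper's SBELL kernel:

```python
import numpy as np

def to_ellpack(rows):
    """Convert a list of sparse rows [(col, val), ...] to ELLPACK arrays.
    Every row is padded to the maximum row length; padded entries use
    column 0 with value 0.0, which contributes nothing to a matvec."""
    k = max(len(r) for r in rows)
    cols = np.zeros((len(rows), k), dtype=np.int64)
    vals = np.zeros((len(rows), k))
    for i, r in enumerate(rows):
        for j, (c, v) in enumerate(r):
            cols[i, j] = c
            vals[i, j] = v
    return cols, vals

def ell_matvec(cols, vals, x):
    """y = A @ x with A in ELLPACK form: gather, multiply, row-sum."""
    return (vals * x[cols]).sum(axis=1)
```

When row lengths vary widely, the global max-length padding explodes; slicing the matrix into small row blocks (the "S" in SBELL) bounds that waste per slice.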
Energy Technology Data Exchange (ETDEWEB)
Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-06-29
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.
Counting loop diagrams: computational complexity of higher-order amplitude evaluation
International Nuclear Information System (INIS)
Eijk, E. van; Kleiss, R.; Lazopoulos, A.
2004-01-01
We discuss the computational complexity of the perturbative evaluation of scattering amplitudes, both by the Caravaglios-Moretti algorithm and by direct evaluation of the individual diagrams. For a self-interacting scalar theory, we determine the complexity as a function of the number of external legs. We describe a method for obtaining the number of topologically inequivalent Feynman graphs containing closed loops, and apply this to 1- and 2-loop amplitudes. We also compute the number of graphs weighted by their symmetry factors, thus arriving at exact and asymptotic estimates for the average symmetry factor of diagrams. We present results for the asymptotic number of diagrams up to 10 loops, and prove that the average symmetry factor approaches unity as the number of external legs becomes large. (orig.)
Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.
Anthony, T Renée; Sleeth, Darrah; Volckens, John
2016-01-01
In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min(-1) in slow moving air (0.2 m s(-1)) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.
Energy Technology Data Exchange (ETDEWEB)
Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-03-31
Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.
An efficient and general numerical method to compute steady uniform vortices
Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.
2011-07-01
Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.
Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations
Southern, J.A.; Plank, G.; Vigmond, E.J.; Whiteley, J.P.
2009-01-01
The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.
Huang, Qinlong; Yang, Yixian; Shi, Yuxiang
2018-02-24
With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations
Southern, J.A.
2009-10-01
The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time while still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study, the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counterintuitive, as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks, it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem.
An Efficient and Secure m-IPS Scheme of Mobile Devices for Human-Centric Computing
Directory of Open Access Journals (Sweden)
Young-Sik Jeong
2014-01-01
Full Text Available Recent rapid developments in wireless and mobile IT technologies have led to their application in many real-life areas, such as disasters, home networks, mobile social networks, medical services, industry, schools, and the military. Business and work environments have become integrated wired/wireless environments. Although the increasing use of mobile devices on wireless networks improves work efficiency and convenience, wireless access to networks represents a security threat. Currently, wireless intrusion prevention systems (IPSs) are used to prevent wireless security threats. However, these are not an ideal security measure for businesses that utilize mobile devices, because they do not take temporal-spatial and role information factors into account. Therefore, in this paper, an efficient and secure mobile-IPS (m-IPS) is proposed for businesses utilizing mobile devices in mobile environments for human-centric computing. The m-IPS system incorporates temporal-spatial awareness in human-centric computing with various mobile devices and checks users' temporal-spatial information, profiles, and role information to provide precise access control. The application of m-IPS can also be extended to the Internet of Things (IoT), one of the key technologies for fully supporting human-centric computing environments, for real ubiquitous deployments with mobile devices.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to extract the hidden information that is useful for decision making from the large databases collected by an organization. Data mining involves many tasks, and mining frequent itemsets is one of the most important for transactional databases. These databases hold data at very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, we propose a system that mines frequent itemsets in a way optimized for memory and time, using cloud computing to parallelize the process and providing the application as a service. The complete framework uses a proven, efficient algorithm, the FIN algorithm, which works on Nodesets and a pre-order coding (POC) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment on a real dataset of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
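The frequent itemset mining task itself can be sketched with a minimal levelwise (Apriori-style) miner; this is only a stand-in for the problem definition, not the FIN/Nodeset algorithm the abstract describes, and the toy transaction database is invented:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Return every itemset occurring in at least min_support transactions.
    Levelwise search without subset pruning -- a sketch, not FIN."""
    items = sorted({i for t in transactions for i in t})
    freq, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = Counter()
        for t in transactions:
            ts = set(t)
            for cand in current:
                if cand <= ts:
                    counts[cand] += 1
        kept = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(kept)
        # candidates for the next level: join frequent k-sets sharing k-1 items
        ks = list(kept)
        current = list({a | b for a, b in combinations(ks, 2) if len(a | b) == k + 1})
        k += 1
    return freq

db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
result = frequent_itemsets(db, min_support=3)
```

A Nodeset-based miner replaces the repeated transaction scans above with set operations on compact node encodings of a POC tree, which is where the memory and time savings come from.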
International Nuclear Information System (INIS)
Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.
2011-01-01
This study evaluated different upscaling methods for predicting thermal conductivity in loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity in loaded nuclear waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of waste form during storage: how the heat generated during storage dissipates is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources against accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
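The unscented transform at the heart of this prediction scheme is easy to state concretely. The sketch below uses the standard symmetric sigma-point set with a simple scalar-output map standing in for the EOL simulation (the valve model and weighting choices of the paper are not reproduced; `kappa` is a generic tuning parameter):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Approximate the mean and variance of y = f(x), x ~ N(mean, cov),
    from 2n+1 deterministically chosen sigma points. Each call to f
    plays the role of one EOL simulation."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)     # columns set the spread
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])            # one "simulation" per point
    m = w @ ys
    v = w @ (ys - m) ** 2
    return m, v

mean = np.array([1.0, 2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
m, v = unscented_transform(mean, cov, lambda x: 3 * x[0] - x[1])
```

For a linear map the transform is exact (here the true mean is 1.0 and the true variance is 17.2), and for a nonlinear EOL simulation it gives second-order-accurate moments from only 2n+1 runs instead of a large Monte Carlo ensemble.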
Sharpe, Richard A; Thornton, Christopher R; Nikolaou, Vasilis; Osborne, Nicholas J
2015-02-01
The United Kingdom (UK) has one of the highest prevalences of asthma in the world, which represents a significant economic and societal burden. Reduced ventilation resulting from increased energy efficiency measures acts as a modifier for mould contamination and risk of allergic diseases. To our knowledge no previous study has combined detailed asset management property data and health data to assess the impact of household energy efficiency (using the UK Government's Standard Assessment Procedure) on asthma outcomes in an adult population residing in social housing. Postal questionnaires were sent to 3867 social housing properties to collect demographic, health and environmental information on all occupants. Detailed property data, residency periods, indices of multiple deprivation (IMD) and household energy efficiency ratings were also investigated. Logistic regression was used to calculate odds ratios and confidence intervals while allowing for clustering of individuals coming from the same location. Eighteen percent of our target social housing population were recruited into our study. Adults had a mean age of 59 (SD±17.3) years and there was a higher percentage of female (59%) and single occupancy (58%) respondents. Housing demographic characteristics were representative of the target homes. A unit increase in household Standard Assessment Procedure (SAP) rating was associated with a 2% increased risk of current asthma, with the greatest risk in homes with SAP >71. We assessed exposure to mould and found that the presence of a mouldy/musty odour was associated with a two-fold increased risk of asthma (OR 2.2; 95% CI 1.3-3.8). A unit increase in SAP led to a 4-5% reduction in the risk of visible mould growth and a mouldy/musty odour. In contrast to previous research, we report that residing in energy efficient homes may increase the risk of adult asthma. We report that mould contamination increased the risk of asthma, which is in agreement with the existing literature.
International Nuclear Information System (INIS)
Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie
2012-01-01
Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). In this study we are particularly interested in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters and to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
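The baseline these authors improve on can be sketched with the classic two-matrix Monte Carlo estimator of first-order Sobol indices (the plain Saltelli/Homma–Saltelli form; the paper's more balanced re-sampling refinement and its error estimator are not reproduced, and the additive test model is invented):

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """First-order Sobol indices S_i = V_i / V from two independent
    sample matrices A and B: S_i is estimated from f(A) paired with
    f evaluated on B with column i replaced by A's column i."""
    A = rng.random((n, d))          # first sample matrix
    B = rng.random((n, d))          # independent second sample matrix
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]         # B with column i taken from A
        S[i] = np.mean(fA * (f(ABi) - fB)) / var
    return S

# Additive test model on uniform [0,1] inputs; exact indices are 0.2 and 0.8.
rng = np.random.default_rng(0)
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2, n=20000, rng=rng)
```

Each index costs one extra batch of model evaluations per parameter, which is exactly why evaluation-hungry tree models motivate the paper's more economical scheme.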
Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer
2014-11-15
Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored, even though it may play an important role in obtaining optimal power. We compared a standard statistical test (a score test) with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13,500. Software is available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Efficient quantum computation in a network with probabilistic gates and logical encoding
DEFF Research Database (Denmark)
Borregaard, J.; Sørensen, A. S.; Cirac, J. I.
2017-01-01
An approach to efficient quantum computation with probabilistic gates is proposed and analyzed in both a local and nonlocal setting. It combines heralded gates previously studied for atom or atomlike qubits with logical encoding from linear optical quantum computation in order to perform high-fidelity quantum gates across a quantum network. The error-detecting properties of the heralded operations ensure high fidelity while the encoding makes it possible to correct for failed attempts such that deterministic and high-quality gates can be achieved. Importantly, this is robust to photon loss, which is typically the main obstacle to photonic-based quantum information processing. Overall this approach opens a path toward quantum networks with atomic nodes and photonic links.
Asymptotic optimality and efficient computation of the leave-subject-out cross-validation
Xu, Ganggang
2012-12-01
Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.
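The leave-subject-out idea itself is simple to demonstrate: all observations from one subject are held out together, so within-subject correlation cannot leak between training and validation folds. The sketch below scores a ridge penalty this way for a plain linear model, with a grid search standing in for the paper's Newton-type algorithm and ridge standing in for penalized splines (the data and penalty grid are synthetic):

```python
import numpy as np

def leave_subject_out_cv(X, y, subjects, lambdas):
    """For each candidate penalty, sum squared prediction error over
    folds that each hold out one whole subject; return the best penalty
    and all fold-averaged scores."""
    p = X.shape[1]
    scores = []
    for lam in lambdas:
        sse = 0.0
        for s in np.unique(subjects):
            test = subjects == s
            Xt, yt = X[~test], y[~test]
            beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
            sse += np.sum((y[test] - X[test] @ beta) ** 2)
        scores.append(sse / len(y))
    return lambdas[int(np.argmin(scores))], scores

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
subjects = np.repeat(np.arange(12), 10)        # 12 subjects, 10 observations each
y = X @ beta_true + rng.standard_normal(120)
lambdas = [0.01, 0.1, 1.0, 10.0]
best, scores = leave_subject_out_cv(X, y, subjects, lambdas)
```

The paper's contribution is to avoid exactly this brute-force refitting: a Newton-type update on the CV criterion locates the optimal penalties far more cheaply, especially with several tuning parameters.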
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
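The underlying optimization problem (sparse coding with an L1 penalty) can be made concrete with a conventional centralized baseline. The sketch below is plain ISTA, not the spiking/distributed HDA of the abstract, and the dictionary and penalty are invented:

```python
import numpy as np

def ista(D, x, lam, steps=300):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1 -- the standard
    centralized solver for the sparse-representation problem that HDA
    attacks with distributed integrate-and-fire dynamics."""
    Lc = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        z = a - D.T @ (D @ a - x) / Lc   # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / Lc, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary columns
a0 = np.zeros(50)
a0[[3, 17, 41]] = [1.0, -2.0, 1.5]       # sparse ground truth
x = D @ a0
a = ista(D, x, lam=0.05)

def objective(a_):
    return 0.5 * np.sum((x - D @ a_) ** 2) + 0.05 * np.abs(a_).sum()
```

HDA replaces the dense, high-precision updates above with low-bandwidth spike communication between nodes while targeting the same objective.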
Beck, Jeffrey; Bos, Jeremy P.
2017-05-01
We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We found that substituting in the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility of extensive improvement in efficiency compared to a fully featured workstation.
Asymptotic optimality and efficient computation of the leave-subject-out cross-validation
Xu, Ganggang; Huang, Jianhua Z.
2012-01-01
Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
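The nature of the simplification can be illustrated numerically. Assuming independent nucleation sites with Gaussian-distributed contact angles, the Monte Carlo route averages site survival over many simulated particles, while the fast route collapses this to a one-dimensional integral raised to the number of sites. The rate function `j(theta)` and all parameter values below are placeholders, not the paper's classical nucleation theory expressions:

```python
import math
import random

MU, SIG, NSITES, T = 1.0, 0.2, 10, 2.0   # invented contact-angle stats, sites, time

def j(theta):
    """Placeholder nucleation rate vs contact angle (NOT the CNT form)."""
    return math.exp(-5.0 * theta)

def frozen_fraction_mc(nparticles=20000, seed=1):
    """Original-SBM style: Monte Carlo over particles, each with NSITES
    independently drawn contact angles; averages expected freezing."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nparticles):
        survive = 1.0
        for _ in range(NSITES):
            theta = rng.gauss(MU, SIG)
            survive *= math.exp(-j(theta) * T)   # site stays unfrozen
        total += 1.0 - survive
    return total / nparticles

def frozen_fraction_fast(nq=4000):
    """Simplified style: independence lets the mean per-site survival
    (one 1D Gaussian quadrature) be raised to the power NSITES."""
    lo, hi = MU - 6.0 * SIG, MU + 6.0 * SIG
    dth = (hi - lo) / nq
    mean_survival = 0.0
    for k in range(nq):
        theta = lo + (k + 0.5) * dth
        pdf = math.exp(-(theta - MU) ** 2 / (2.0 * SIG ** 2)) / (SIG * math.sqrt(2.0 * math.pi))
        mean_survival += pdf * math.exp(-j(theta) * T) * dth
    return 1.0 - mean_survival ** NSITES

f_mc = frozen_fraction_mc()
f_fast = frozen_fraction_fast()
```

Both routes estimate the same frozen fraction, but the quadrature version costs a fixed, small number of evaluations, which is what makes it usable inside cloud parcel models.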
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle
Directory of Open Access Journals (Sweden)
Junpeng Shi
2017-02-01
Full Text Available In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is applied only to the auto-correlation matrices while the cross-correlation matrices are retained in full. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods in both white and colored noise conditions.
Computationally efficient method for optical simulation of solar cells and their applications
Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.
2013-01-01
This paper presents two novel implementations of the Differential method to solve the Maxwell equations in nanostructured optoelectronic solid-state devices. The first implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability which arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms work with structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-11-01
Full Text Available This paper proposes an efficient approach for the computation of voltage stability margin (VSM in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance match-based technique and the model-based technique. It combines the Thevenin equivalent (TE network method with cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan system are used to demonstrate the effectiveness of the proposed approach.
Computationally efficient SVM multi-class image recognition with confidence measures
International Nuclear Information System (INIS)
Makili, Lazaro; Vega, Jesus; Dormido-Canto, Sebastian; Pastor, Ignacio; Murari, Andrea
2011-01-01
Typically, machine learning methods produce non-qualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are very recent classifiers able to provide, simultaneously with the prediction, a pair of values (confidence and credibility) that reflect the quality of the prediction. A drawback of transductive techniques for huge datasets of large dimensionality is usually the high computational time. To overcome this issue, a more efficient classifier has been used in a multi-class image classification problem in the TJ-II stellarator database. It is based on the creation of a hash function to generate several 'one versus the rest' classifiers for every class. Using Support Vector Machines as the underlying classifier, a comparison between the pure transductive approach and the new method has been performed. In both cases, the success rates are high, and the computation time of the new method is as low as 0.4 times that of the old one.
Sengupta, Abhronil; Roy, Kaushik
2017-12-01
Present-day computers expend orders of magnitude more computational resources than humans do to perform the various cognitive and perception-related tasks that humans routinely perform every day. This has recently resulted in a seismic shift in the field of computation, where research efforts are being directed to develop a neurocomputer that attempts to mimic the human brain with nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper attempts to provide a review of the recent developments in the field of spintronic device based neuromorphic computing. A description of various spin-transfer torque mechanisms that can be potentially utilized for realizing device structures mimicking neural and synaptic functionalities is provided. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. A device-circuit-algorithm co-simulation framework calibrated to experimental results suggests that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations.
Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks
Directory of Open Access Journals (Sweden)
Hui-Ping Chen
2016-11-01
Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that accessible links in a STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
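The A* building block used inside NTP-A* can be sketched compactly. This is a generic A* over an adjacency dict with an admissible straight-line heuristic, on a made-up three-node graph; the branch-and-bound pruning layer that makes the full algorithm fast is omitted:

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* shortest path: graph maps node -> [(neighbor, weight), ...],
    coords maps node -> (x, y) for the Euclidean heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    dist = {start: 0.0}
    parent = {start: None}
    pq = [(h(start), start)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:                      # reconstruct and return the path
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return dist[goal], path[::-1]
        for v, w in graph.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf"), []

graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)]}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
d, path = a_star(graph, coords, "A", "C")
```

Because the heuristic never overestimates the remaining distance, A* expands far fewer links than a one-to-all Dijkstra search, which is the efficiency lever the abstract exploits during STP construction.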
A-VCI: A flexible method to efficiently compute vibrational spectra
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-01
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation to date; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today for such systems.
Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
Improving the computation efficiency of COBRA-TF for LWR safety analysis of large problems
International Nuclear Information System (INIS)
Cuervo, D.; Avramova, M. N.; Ivanov, K. N.
2004-01-01
A matrix solver is implemented in COBRA-TF in order to improve the computational efficiency of both numerical solution methods existing in the code: Gauss elimination and the Gauss-Seidel iterative technique. Both methods are used to solve the system of linear pressure equations and rely on the solution of large sparse matrices. The introduced solver accelerates the solution of these matrices in cases with a large number of cells. For cases with large matrices, the execution time is reduced by half compared to the execution time without the matrix solver. The achieved improvement, and the planned future work in this direction, are important for performing efficient LWR safety analyses of large problems. (authors)
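The two solution options named in the abstract can be contrasted on a toy system. The sketch below runs a plain Gauss-Seidel sweep and a direct (Gauss elimination) solve on a small 1D pressure-like tridiagonal system; it is purely illustrative (dense storage, invented stencil), whereas the point of the COBRA-TF solver is to exploit the sparsity of the real pressure matrices:

```python
import numpy as np

def gauss_seidel(A, b, iters=3000):
    """In-place Gauss-Seidel sweeps: each unknown is updated using the
    freshest values of its neighbors (dense, illustrative version)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D pressure-like stencil
b = np.ones(n)
x_iter = gauss_seidel(A, b)
x_direct = np.linalg.solve(A, b)                         # Gauss elimination
```

On large meshes the direct dense solve scales as O(n^3), while sparse-aware storage and iteration touch only the nonzero entries per sweep; that gap is what the introduced matrix solver narrows for large cell counts.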
Ivanov, Mikhail V; Babikov, Dmitri
2012-05-14
An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate with the experimental result is presented.
FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS
Directory of Open Access Journals (Sweden)
A. S. Dokuz
2017-11-01
Full Text Available Socially important locations are places that are frequently visited by social media users during their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
An Efficient Integer Coding and Computing Method for Multiscale Time Segment
Directory of Open Access Journals (Sweden)
TONG Xiaochong
2016-12-01
Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach exploits the tree structure and the size ordering that integers naturally form, reflecting the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), and thereby achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for calculating the time relationships of MTSIC, to support efficient calculation and querying based on time segments, and it preliminarily discusses application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that conversion between it and the traditional representation is straightforward, and that it achieves very high efficiency in querying and calculation.
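The abstract does not spell out the MTSIC scheme itself. A hypothetical heap-style integer coding of dyadic time segments illustrates the general idea of using integer order and a tree structure to encode scale and containment:

```python
def encode(level, index):
    """Heap-style code for 0-based segment `index` at dyadic `level`:
    level 0 = whole span (code 1), level 1 = two halves (codes 2, 3), ...
    Deeper (smaller) segments always receive larger integers, so integer
    order reflects scale, and containment is a parent/child relation."""
    return (1 << level) + index

def contains(code_a, code_b):
    """True if the segment coded by code_a contains the one coded by code_b."""
    while code_b > code_a:
        code_b >>= 1          # climb to the parent segment
    return code_a == code_b

root = encode(0, 0)            # the whole time span -> 1
h1   = encode(1, 0)            # first half           -> 2
q1   = encode(2, 0)            # first quarter        -> 4
```

This is only one way to realize an integer segment coding; the published MTSIC scheme may differ in detail.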
DEFF Research Database (Denmark)
Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar
2012-01-01
An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik
2007-01-01
for nonlinear stochastic continuous-discrete time systems is more than two orders of magnitude faster than a conventional implementation. This is of significance in nonlinear model predictive control applications, statistical process monitoring as well as grey-box modelling of systems described by stochastic......We present a novel numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting differential equations for the mean-covariance evolution of the nonlinear stochastic continuous-discrete time systems...
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Floods are among the most hazardous disasters and cause serious damage to people and property around the world. To prevent or mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to simulate flood flow over complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various directions, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet-dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional bookkeeping is kept small by checking only cells at the wet-dry interface, and the overall computation time is reduced by skipping the non-flooded area. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in one-half to one-tenth of the computation time while producing the same results as the simulation without the ADU method.
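The registration step described above can be sketched on a grid as follows (a minimal illustration of the bookkeeping, not the authors' implementation; cell names and the depth threshold are assumptions):

```python
def step_adu(depth, active, threshold=0.0):
    """One bookkeeping pass of a minimal Automatic-Domain-Updating sketch:
    every active cell that is wet registers its 4-neighbours, so the flow
    solver only ever visits cells in `active` instead of the whole grid.
    `depth` maps (i, j) -> water depth; `active` is the set of registered cells."""
    newly = set()
    for (i, j) in active:
        if depth.get((i, j), 0.0) > threshold:        # cell is wet
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (i + di, j + dj)
                if nb not in active:
                    newly.add(nb)
    active |= newly
    return active

# Start from a cell next to the river channel, then let the domain grow
depth = {(0, 0): 0.5}             # only the channel cell is wet so far
active = {(0, 0)}
active = step_adu(depth, active)  # registers the 4 neighbours of (0, 0)
```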
Collaborative and Competitive Video Games for Teaching Computing in Higher Education
Smith, Spencer; Chan, Samantha
2017-08-01
This study measures the success of using a collaborative and competitive video game, named Space Race, to teach computing to first-year engineering students. Space Race is played by teams of four, each member with their own tablet, collaborating to compete against the other teams in the class. The impact of the game on student learning was studied through measurements involving 485 students over one term. Surveys were used to gauge student reception of the game, while pre- and post-tests and in-course examinations were used to quantify student performance. The game was well received, with at least 82% of the students who played it recommending it to others. In some cases, game participants outperformed non-participants on course exams. On the final course exam, all of the statistically significant (p < 0.05) differences favoured game participants, with a maximum grade improvement of 41%. The findings also suggest that some students retain the knowledge obtained from Space Race for at least 7 weeks. The results of this study provide strong evidence that a collaborative and competitive video game can be an effective tool for teaching computing in post-secondary education.
PVT: an efficient computational procedure to speed up next-generation sequence analysis.
Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur
2014-06-04
High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. The technology generates substantially large volumes of data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective way to analyse them. Moreover, across the different types of NGS data, certain common and challenging analysis steps are involved. Spliced alignment is one such fundamental step in NGS data analysis that is extremely computationally intensive as well as time consuming. Serious problems persist even in the most widely used spliced alignment tools. TopHat is one such widely used tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), in which we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. We thus address the shortcomings of TopHat so as to analyse large NGS datasets efficiently. We analysed the SRA datasets SRX026839 and SRX026838, consisting of single-end reads, and the SRA dataset SRR1027730, consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and recorded the CPU usage, memory footprint, and execution time during spliced alignment. With this baseline information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising different steps) to improve resource utilization, thus reducing execution time. PVT provides an improvement over TopHat for spliced alignment in NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired end reads showed an
Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?
Schwartz, H C
2014-05-01
The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery planned using computer-aided surgical simulation (CASS) with that of cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely across localities and health schemes, time is a valuable and limited resource everywhere; for this reason, total doctor time is a more useful measure of efficiency than cost. Even though we currently use CASS primarily for planning more complex cases, this study showed an average saving of 60 min per case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery, saving significant doctor time that can be applied to additional surgical work. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Siegrist, M.; Stahl, S.; Ganz, J.
2010-06-15
This final report for the Swiss Federal Office of Energy (SFOE) examines how a refrigerator can achieve higher efficiency through a modified compressor. The modified refrigerator was fitted with a variable-speed compressor, which could be run at much lower speeds so that it was in operation for up to 90% of the time. It was shown that the longer the compressor ran, the less electricity was consumed. The report discusses the aims of the work and presents details of the standard refrigerator used for the tests. The compressor normally used and the variable-speed compressor used in the test are described, as are the systems for temperature control and data acquisition during the tests. The results obtained are examined and the influence of various factors is discussed.
Directory of Open Access Journals (Sweden)
Francisco Geraldo Barbosa
2015-12-01
Full Text Available Intermolecular forces are a useful concept that can explain the attraction between particles of matter as well as numerous phenomena in our lives, such as viscosity, solubility, drug interactions, and the dyeing of fibers. However, studies show that students have difficulty understanding this important concept, which led us to develop free educational software in English and Portuguese. The software can be used interactively by teachers and students, thus facilitating better understanding. Professors and students, both graduate and undergraduate, were questioned about the software's quality, intuitiveness, ease of navigation, and pedagogical applicability using a Likert scale. The results led to the conclusion that the developed application can serve as an auxiliary tool to assist teachers in their lectures and students in their process of learning intermolecular forces.
Computational Study on the Effect of Shroud Shape on the Efficiency of the Gas Turbine Stage
Afanas'ev, I. V.; Granovskii, A. V.
2018-03-01
The last stages of high-power gas turbines play an important role in the output and efficiency of the whole unit, as well as in the distribution of the flow parameters behind the last stage, which determines the efficient operation of the exhaust diffusers. Therefore, much attention is paid to improving the efficiency of the last stages of gas turbines as well as the distribution of flow parameters. Since the long blades of the last stages of multistage high-power gas turbines can fall into the resonance frequency range during operation, which results in the destruction of the blades, damping wires or damping bolts are used for detuning from resonance frequencies. However, these damping elements cause additional energy losses, leading to a reduction in stage efficiency. To minimize these losses, damping shrouds are used instead of wires and bolts at the periphery of the working blades. However, because of strength problems, designers have to use, instead of the most efficient full shrouds, partial shrouds that cannot significantly reduce the losses in the tip clearance between the blade and the turbine housing. In this paper, a computational study is performed on the effect that the shroud design of the turbine working blade exerts on the flow structure in the vicinity of the shroud and on the efficiency of the stage as a whole. The analysis of the flow structure has shown that a significant part of the losses incurred when using shrouds is associated with the formation of vortex zones in the cavities on the turbine housing before the shrouds, between the ribs of the shrouds, and in the cavities at the outlet behind the shrouds. All the investigated variants of partial shrouding are inferior in efficiency to stages with shrouds that completely cover the tip section of the working blade. The stage with an unshrouded working blade was most efficient at the values of the relative tip clearance
Increasing the computational efficiency of digital cross correlation by a vectorization method
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for MATLAB programming aimed at increasing the computational efficiency of digital cross correlation for sound and images, yielding speedups of 6.387 and 36.044 times over the performance of the looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. The paper uses numerical simulation to verify the speedup of the proposed vectorization method, together with experiments measuring the quantitative transient displacement response under dynamic impact loading. The experiment used a high-speed camera as well as a fiber-optic system to measure the transient displacement of a cantilever beam struck by a steel ball. Measurement data obtained by the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also provide the source code with which to build MATLAB-executable functions on both Windows and Linux platforms, along with a series of examples demonstrating the application of the proposed vectorization method.
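The paper's code targets MATLAB; the same loop-versus-vectorized contrast can be reproduced in NumPy (an assumption for illustration, not the authors' code), where the vectorized call delegates the correlation loop to optimized native code:

```python
import numpy as np

def xcorr_loop(x, y):
    """Direct looped cross-correlation over all lags (the slow reference).
    Implemented as convolution of x with the reversed y, which matches
    np.correlate(x, y, mode='full') for real signals."""
    n, m = len(x), len(y)
    w = y[::-1]
    out = [0.0] * (n + m - 1)
    for k in range(n + m - 1):
        for i in range(n):
            j = k - i
            if 0 <= j < m:
                out[k] += x[i] * w[j]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(16)

fast = np.correlate(x, y, mode="full")   # vectorized: one call into C code
slow = np.array(xcorr_loop(x, y))        # interpreted double loop
```

The measured speedup depends on signal length and platform; only the agreement of the two results is asserted here.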
Errors in measuring absorbed radiation and computing crop radiation use efficiency
International Nuclear Information System (INIS)
Gallo, K.P.; Daughtry, C.S.T.; Wiegand, C.L.
1993-01-01
Radiation use efficiency (RUE) is often a crucial component of crop growth models that relate dry matter production to energy received by the crop. RUE is a ratio with units of g J^-1 if defined as phytomass per unit of energy received, and units of J J^-1 if defined as the energy content of phytomass per unit of energy received. Both the numerator and the denominator in the computation of RUE can vary with experimental assumptions and methodologies. The objectives of this study were to examine the effect that different methods of measuring the numerator and denominator have on the RUE of corn (Zea mays L.) and to illustrate this variation with experimental data. Computational methods examined included (i) direct measurement of the fraction of photosynthetically active radiation absorbed (f_A), (ii) estimates of f_A derived from leaf area index (LAI), and (iii) estimates of f_A derived from spectral vegetation indices. Direct measurements of absorbed PAR from planting to physiological maturity of corn were consistently greater than the indirect estimates based on green LAI or the spectral vegetation indices. Consequently, the RUE calculated using directly measured absorbed PAR was lower than the RUE calculated using the indirect measures of absorbed PAR. For crops that contain senesced vegetation, green LAI and the spectral vegetation indices provide appropriate estimates of the fraction of PAR absorbed by a crop canopy and, thus, accurate estimates of crop radiation use efficiency.
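As a worked illustration of method (ii) above (not the authors' exact procedure), f_A is commonly estimated from LAI with a Beer's-law canopy model, f_A = 1 - exp(-k * LAI); the extinction coefficient k and all numbers below are assumed values for illustration only:

```python
import math

def fA_from_lai(lai, k=0.6):
    """Indirect estimate of the fraction of PAR absorbed, via Beer's law.
    k is an assumed canopy extinction coefficient, not a measured value."""
    return 1.0 - math.exp(-k * lai)

def rue(dry_matter_g_m2, incident_par_mj_m2, f_absorbed):
    """Radiation use efficiency in g per MJ of absorbed PAR:
    numerator = phytomass, denominator = incident PAR times fraction absorbed."""
    return dry_matter_g_m2 / (incident_par_mj_m2 * f_absorbed)

# Illustrative numbers: 1200 g m^-2 dry matter, 800 MJ m^-2 incident PAR
f_direct   = 0.85                # a (hypothetical) directly measured f_A
f_indirect = fA_from_lai(3.0)    # ~0.83 from an assumed LAI of 3

rue_direct   = rue(1200, 800, f_direct)
rue_indirect = rue(1200, 800, f_indirect)
```

Consistent with the abstract, the larger (directly measured) f_A yields the smaller RUE.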
On the Design of Energy-Efficient Location Tracking Mechanism in Location-Aware Computing
Directory of Open Access Journals (Sweden)
MoonBae Song
2005-01-01
Full Text Available The battery, in contrast to other hardware, is not governed by Moore's Law. In location-aware computing, power is a very limited resource, and a number of promising techniques at various layers have recently been proposed to reduce energy consumption. This paper considers the problem of minimizing the energy used to track the location of a mobile user over a wireless link in mobile computing. An energy-efficient location update protocol should reduce the number of location update messages as far as possible and switch the device off for as long as possible. This can be achieved through the concept of mobility-awareness we propose. To this end, this paper proposes a novel mobility model, called the state-based mobility model (SMM), to provide a more generalized framework for both describing the mobility of, and updating the location information for, complexly moving objects. We also introduce the state-based location update protocol (SLUP) based on this mobility model. Extensive experiments on various synthetic datasets show that the proposed method improves energy efficiency by a factor of 2 to 3 at an additional imprecision cost of 10%.
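The abstract leaves SLUP's internals out. The generic idea behind reducing update messages, suppressing a report while a motion model shared with the server still predicts the user's position well enough, can be sketched as follows (all helper names and the toy track are hypothetical):

```python
def reported_updates(positions, predict, threshold):
    """Return the location reports actually sent. A report is sent only when
    the shared prediction drifts beyond `threshold` (Manhattan distance here).
    predict(last_report, steps) is the motion model known to both sides;
    SLUP itself is state-based and richer than this single-model sketch."""
    last_report, steps = positions[0], 0
    sent = [positions[0]]
    for pos in positions[1:]:
        steps += 1
        px, py = predict(last_report, steps)
        if abs(pos[0] - px) + abs(pos[1] - py) > threshold:
            last_report, steps = pos, 0   # resynchronize with the server
            sent.append(pos)
    return sent

# Trivial shared model: assume the user stays put; report only real movement
stay = lambda p, k: p
track = [(0, 0), (0, 0), (0, 1), (0, 1), (5, 5)]
reports = reported_updates(track, stay, threshold=1.5)   # small jitter suppressed
```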
Directory of Open Access Journals (Sweden)
Shaat Musbah
2010-01-01
Full Text Available Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Owing to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm achieves near-optimal performance at low computational complexity and demonstrate the efficiency of using FBMC in the CR context.
National Aeronautics and Space Administration — This task is to develop and demonstrate a path-to-flight and power-adaptive avionics technology PEAC (Power Efficient Adaptive Computing). PEAC will enable emerging...
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the
Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms
Directory of Open Access Journals (Sweden)
Noor M. Khan
2017-01-01
Full Text Available In this paper, a novel processing-efficient architecture for a group of inexpensive and computationally limited small platforms is proposed for parallelized distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm applied to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, for a low Doppler rate, the proposed architecture reduces the processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively. Likewise, for a high Doppler rate, the proposed architecture reduces the processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
Energy Technology Data Exchange (ETDEWEB)
Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry
2016-07-26
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in the electronic interactions, and thus to reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular, and their parallelization has previously been achieved via dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed-memory environment in a scalable and energy-efficient manner. We achieve up to 240x speedup compared with the best optimized shared-memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, BlueGene/Q) and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
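The reduction of a tensor contraction to a single matrix-matrix multiplication, which such libraries build on, can be illustrated generically in NumPy (this is not Libtensor's API, just the underlying idea):

```python
import numpy as np

# Contract T1[a, i, b] with T2[b, j] over the shared index b, giving R[a, i, j]
rng = np.random.default_rng(42)
T1 = rng.standard_normal((3, 4, 5))
T2 = rng.standard_normal((5, 6))

# Reshape so the contraction becomes one GEMM call: (a*i, b) @ (b, j).
# Row-major layout makes this reshape a no-copy view of T1.
R_gemm = (T1.reshape(3 * 4, 5) @ T2).reshape(3, 4, 6)

# Reference result computed directly from the index expression
R_ref = np.einsum("aib,bj->aij", T1, T2)
```

Production libraries add symmetry-aware blocking and permutation on top of this mapping so that each block contraction still lands in an optimized GEMM.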
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to the free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict the free energies associated with enzymatic processes is critical to understanding and predicting enzyme function. Free energy simulation (FES) has historically been a computational challenge, as it requires both an accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D
2015-07-10
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
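The recursive equations referred to above are, in their standard serial form, a running sum along each row followed by a running sum down each column; the payoff is that any rectangle sum then costs only four look-ups. A direct sketch (the serial baseline the paper decomposes, not its row-parallel hardware algorithms):

```python
import numpy as np

def integral_image(img):
    """Serial recursive computation: s is the running row sum, ii accumulates
    s down the columns. ii[y, x] = sum of img over all pixels with
    row <= y and col <= x (inclusive)."""
    ii = np.zeros_like(img, dtype=np.int64)
    s = np.zeros_like(img, dtype=np.int64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            s[y, x] = (s[y, x - 1] if x > 0 else 0) + img[y, x]
            ii[y, x] = (ii[y - 1, x] if y > 0 else 0) + s[y, x]
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum over img[y0:y1+1, x0:x1+1] from four integral-image look-ups:
    the O(1) rectangular-feature evaluation used by SURF-style detectors."""
    total = int(ii[y1, x1])
    if y0 > 0:
        total -= int(ii[y0 - 1, x1])
    if x0 > 0:
        total -= int(ii[y1, x0 - 1])
    if y0 > 0 and x0 > 0:
        total += int(ii[y0 - 1, x0 - 1])
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
```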
Directory of Open Access Journals (Sweden)
Shoaib Ehsan
2015-07-01
Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
Low-cost, high-performance and efficiency computational photometer design
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency, drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge-Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. The proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh arctic environments, including volcanic plumes, ice formation, and arctic marine life.
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Directory of Open Access Journals (Sweden)
Arun Arjunan
2015-08-01
Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Efficient simulation models that predict acoustic insulation in compliance with ISO 10140 are therefore needed at the design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octave bands. This produces a large simulation model consisting of several million nodes and elements. Therefore, efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames to the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.
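A frequency-dependent mesh ties element size to the acoustic wavelength in each one-third-octave band. As a rough sketch of the idea (the elements-per-wavelength count, band range, and function names are illustrative assumptions, not taken from the paper):

```python
def third_octave_centres(f_start=100.0, f_end=3150.0):
    """Exact base-2 one-third-octave band centre frequencies."""
    freqs, f = [], f_start
    while f <= f_end * 1.0001:
        freqs.append(f)
        f *= 2.0 ** (1.0 / 3.0)
    return freqs

def max_element_size(f, c=343.0, elems_per_wavelength=6):
    """Frequency-dependent meshing rule of thumb: resolve each acoustic
    wavelength lambda = c / f with at least N elements, so h <= c / (N * f)."""
    return c / (elems_per_wavelength * f)
```

Meshing each band with its own maximum element size keeps low bands coarse (fast to solve) while still resolving the short wavelengths of the high bands.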
Energy Technology Data Exchange (ETDEWEB)
Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi
2016-11-15
Highlights: • Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors. • Applied a fully-coupled JFNK solution scheme to avoid the operator-splitting errors. • The accuracy and efficiency of the method are confirmed with a 7-assembly test problem. • The effects of different spatial discretization schemes are investigated and compared to the RANS-based CFD simulations. - Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and the near-wall coolant region are modeled separately to account for different temperatures and heat transfer between the coolant flow and each side of the duct wall. The Jacobian-Free Newton–Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously, in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated on a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
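The JFNK idea named in the abstract can be illustrated independently of SAM: the Jacobian is never assembled; each Krylov iteration instead approximates J·v with a finite difference of the residual. A self-contained toy sketch on a 1-D nonlinear conduction problem (the model problem, discretization, and tolerances are illustrative assumptions, not the paper's):

```python
import numpy as np

def residual(u, f, h):
    """Discrete residual of -u'' + u**3 = f with zero Dirichlet boundaries."""
    up = np.pad(u, 1)                          # embed the boundary zeros
    lap = (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2
    return lap + u**3 - f

def jfnk_solve(f, n, newton_tol=1e-9, eps=1e-7):
    """Jacobian-free Newton-Krylov sketch: Newton outer loop; matrix-free
    conjugate-gradient inner loop where J*v is approximated by a finite
    difference of the residual, so the Jacobian is never formed."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    for _ in range(30):
        r = residual(u, f, h)
        if np.linalg.norm(r) < newton_tol:
            break

        def jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            e = eps / nv                       # scale the perturbation to v
            return (residual(u + e * v, f, h) - r) / e

        # matrix-free CG for J du = -r (J is SPD for this operator)
        du = np.zeros(n)
        res = -r.copy()
        p = res.copy()
        rs0 = rs = res @ res
        for _ in range(5 * n):
            Ap = jv(p)
            alpha = rs / (p @ Ap)
            du += alpha * p
            res -= alpha * Ap
            rs_new = res @ res
            if rs_new < 1e-12 * rs0:
                break
            p = res + (rs_new / rs) * p
            rs = rs_new
        u += du
    return u
```

The fully-coupled character comes from solving all unknowns in one Newton system; in a conjugate heat transfer code, the fluid and solid residuals would simply be stacked into one residual vector.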
Directory of Open Access Journals (Sweden)
Hyun-Woo Kim
2015-06-01
Full Text Available Following the rapid growth of ubiquitous computing, many jobs that were previously manual have now been automated. This automation has increased the amount of time available for leisure, and diverse services are now being developed for this leisure time. In addition, with the development of small, portable devices like smartphones, diverse Internet services can be used regardless of time and place. Studies regarding diverse virtualization are currently in progress. These studies aim to determine ways to efficiently store and process the big data generated by the multitude of devices and services in use. One topic of such studies is desktop storage virtualization, which integrates distributed legacy desktop resources and provides them to users via virtualization. In the case of desktop storage virtualization, high availability is necessary and important for providing reliability to users. Studies regarding hierarchical structures and resource integration are currently in progress. These studies aim to create efficient data distribution and storage for distributed desktops based on resource integration environments. However, studies regarding efficient responses to server faults occurring in desktop-based resource integration environments have been insufficient. This paper proposes a mechanism for the sustainable operation of desktop storage (SODS) for high operational availability. It allows for the easy addition and removal of desktops in desktop-based integration environments. It also activates alternative servers when a fault occurs within a system.
M. Kasemann
Overview During the past three months, activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Directory of Open Access Journals (Sweden)
Bryan Howell
Full Text Available Spinal cord stimulation (SCS is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS
Babbitt, Callie W; Kahhat, Ramzy; Williams, Eric; Babbitt, Gregory A
2009-07-01
Product lifespan is a fundamental variable in understanding the environmental impacts associated with the life cycle of products. Existing life cycle and materials flow studies of products, almost without exception, consider lifespan to be constant over time. To determine the validity of this assumption, this study provides an empirical documentation of the long-term evolution of personal computer lifespan, using a major U.S. university as a case study. Results indicate that over the period 1985-2000, computer lifespan (purchase to "disposal") decreased steadily from a mean of 10.7 years in 1985 to 5.5 years in 2000. The distribution of lifespan also evolved, becoming narrower over time. Overall, however, lifespan distribution was broader than normally considered in life cycle assessments or materials flow forecasts of electronic waste management for policy. We argue that these results suggest that at least for computers, the assumption of constant lifespan is problematic and that it is important to work toward understanding the dynamics of use patterns. We modify an age-structured model of population dynamics from biology as a modeling approach to describe product life cycles. Lastly, the purchase share and generation of obsolete computers from the higher education sector is estimated using different scenarios for the dynamics of product lifespan.
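The age-structured modeling approach described above can be sketched as a discrete cohort model: units purchased in year t retire according to a lifespan probability distribution, so the obsolete-unit time series is a convolution. A minimal illustration (the numbers in the usage test are made up, not the study's data):

```python
import numpy as np

def obsolete_flow(purchases, lifespan_pmf):
    """Discrete age-structured cohort model: units purchased in year t
    become obsolete in year t + k with probability lifespan_pmf[k].
    The obsolete-unit time series is the convolution of the two."""
    return np.convolve(np.asarray(purchases, dtype=float),
                       np.asarray(lifespan_pmf, dtype=float))
```

A narrowing or shortening lifespan distribution, as observed between 1985 and 2000, simply means lifespan_pmf changes over time; a full age-structured model would let the pmf depend on the purchase year.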
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
Directory of Open Access Journals (Sweden)
Michael Lanahan
2017-05-01
Full Text Available Buildings consume approximately ¾ of the total electricity generated in the United States, contributing significantly to fossil fuel emissions. Sustainable and renewable energy production can reduce fossil fuel use, but necessitates storage for energy reliability in order to compensate for the intermittency of renewable energy generation. Energy storage is critical for success in developing a sustainable energy grid because it facilitates higher renewable energy penetration by mitigating the gap between energy generation and demand. This review analyzes recent case studies—numerical and field experiments—of borehole thermal energy storage (BTES) in space heating and domestic hot water capacities, coupled with solar thermal energy. System design, model development, and working principle(s) are the primary focus of this analysis. A synopsis of the current efforts to effectively model BTES is presented as well. The literature review reveals that: (1) energy storage is most effective when diurnal and seasonal storage are used in conjunction; (2) no established link exists between BTES computational fluid dynamics (CFD) models and whole-building energy analysis tools, which instead rely on parameter-fit component models; (3) BTES has fewer geographical limitations than Aquifer Thermal Energy Storage (ATES) and a lower installation cost scale than hot water tanks; and (4) BTES is more often used for heating than for cooling applications.
Costa Ferrer, Raquel; Serrano Rosa, Miguel Ángel; Zornoza Abad, Ana; Salvador Fernández-Montejo, Alicia
2010-11-01
The cardiovascular (CV) response to social challenge and stress is associated with the etiology of cardiovascular diseases. New ways of communication, time pressure and different types of information are common in our society. In this study, the cardiovascular response to two different tasks (open vs. closed information) was examined employing different communication channels (computer-mediated vs. face-to-face) and different pace control (self vs. external). Our results indicate that there was a higher CV response in the computer-mediated condition, on the closed information task, and in the externally paced condition. The role of these factors should be considered when studying the consequences of social stress and their underlying mechanisms.
Computational design of high efficiency release targets for use at ISOL facilities
Liu, Y
1999-01-01
This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected t...
Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform
Directory of Open Access Journals (Sweden)
Suja Kalarikkal Pullayikodi
2017-10-01
Full Text Available A watermark is a copy-deterrence mechanism embedded in a multimedia signal that is to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by the decoder. Watermarking can be used in various applications such as authentication, video indexing, copyright protection and access control. In this paper a new CDMA (Code Division Multiple Access) based robust watermarking algorithm using a customized 8 × 8 Walsh Hadamard Transform is proposed for color images, and a detailed performance and robustness analysis has been performed. The paper studies in detail the effect of spreading code length, number of spreading codes and type of spreading codes on the performance of the watermarking system. Compared to the existing techniques the proposed scheme is computationally more efficient and consumes much less time for execution. Furthermore, the proposed scheme is robust and survives most of the common signal processing and geometric attacks.
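The fast Walsh-Hadamard transform underlying such schemes needs only additions and subtractions, which is what makes it computationally cheap compared with DCT- or DFT-based embedding. A generic in-place sketch (natural/Hadamard ordering; not the paper's customized 8 × 8 variant):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform, natural (Hadamard) order.
    Input length must be a power of two; O(n log n) adds/subtracts only."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired entries
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Since the Hadamard matrix satisfies H·H = n·I, applying the transform twice and dividing by the length recovers the original signal, which is how a spread-spectrum watermark embedded in the transform domain is inverted.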
Computer model of copper resistivity will improve the efficiency of field-compression devices
International Nuclear Information System (INIS)
Burgess, T.J.
1977-01-01
By detonating a ring of high explosive around an existing magnetic field, we can, under certain conditions, compress the field and multiply its strength tremendously. In this way, we can duplicate for a fraction of a second the extreme pressures that normally exist only in the interior of stars and planets. Under such pressures, materials may exhibit behavior that will confirm or alter current notions about the fundamental structure of matter and the ongoing processes in planetary interiors. However, we cannot design an efficient field-compression device unless we can calculate the electrical resistivity of certain basic metal components, which interact with the field. To aid in the design effort, we have developed a computer code that calculates the resistivity of copper and other metals over the wide range of temperatures and pressures found in a field-compression device
Quantification of ventilated facade efficiency by using computational fluid mechanics techniques
International Nuclear Information System (INIS)
Mora Perez, M.; Lopez Patino, G.; Bengochea Escribano, M. A.; Lopez Jimenez, P. A.
2011-01-01
In some countries, summer over-heating is a big problem in a building's energy balance. Ventilated facades are a useful tool when applied to building design, especially in bioclimatic building design. A ventilated facade is a complex, multi-layer structural solution that enables dry installation of the covering elements. The objective of this paper is to quantify the improvement in a building's thermal efficiency when this sort of facade is installed. These improvements are due to convection produced in the air gap of the facade. This convection depends on the air movement inside the gap and on the heat transmission in this motion. These quantities are mathematically modelled by Computational Fluid Dynamics (CFD) techniques using a commercial code: STAR-CCM+. The proposed method allows an assessment of the energy potential of the ventilated facade and its capacity for cooling. (Author) 23 refs.
Energy Technology Data Exchange (ETDEWEB)
Pais Pitta de Lacerda Ruivo, Tiago [IIT, Chicago; Bernabeu Altayo, Gerard [Fermilab; Garzoglio, Gabriele [Fermilab; Timm, Steven [Fermilab; Kim, Hyun-Woo [Fermilab; Noh, Seo-Young [KISTI, Daejeon; Raicu, Ioan [IIT, Chicago
2014-11-11
It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of InfiniBand hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an InfiniBand network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR InfiniBand network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d(4)) for the basis change plus O(d(3)) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d(3)) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
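The final step the abstract describes, finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one, is a projection onto the probability simplex. A sketch of that projection and of the resulting nearest-physical-state reconstruction (function names are mine; this uses the standard sort-based O(d log d) form rather than the paper's linear-time variant):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex
    {x : x >= 0, sum(x) = 1} -- the 'closest probability distribution'."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                              # descending order
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]  # last feasible index
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def nearest_physical_state(mu):
    """Nearest density matrix (2-norm) to a Hermitian, unit-trace candidate:
    project the eigenvalues onto the simplex, keep the eigenvectors."""
    vals, vecs = np.linalg.eigh(mu)
    p = project_to_simplex(vals)
    return (vecs * p) @ vecs.conj().T
```

Negative eigenvalues of the candidate μ are zeroed and the deficit is redistributed uniformly over the remaining eigenvalues, which is exactly the 2-norm-optimal repair.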
International Nuclear Information System (INIS)
Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg
2015-01-01
We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ0, at 0 K and (ii) the energy barrier, ΔEb, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ0 at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement.
Computational screening of new inorganic materials for highly efficient solar energy conversion
DEFF Research Database (Denmark)
Kuhar, Korina
2017-01-01
in solar cells convert solar energy into electricity, and PC uses harvested energy to conduct chemical reactions, such as splitting water into oxygen and, more importantly, hydrogen, also known as the fuel of the future. Further progress in both PV and PC fields is mostly limited by the flaws in materials...... materials. In this work a high-throughput computational search for suitable absorbers for PV and PC applications is presented. A set of descriptors has been developed, such that each descriptor targets an important property or issue of a good solar energy conversion material. The screening study...... that we have access to. Despite the vast amounts of energy at our disposal, we are not able to harvest this solar energy efficiently. Currently, there are a few ways of converting solar power into usable energy, such as photovoltaics (PV) or photoelectrochemical generation of fuels (PC). PV processes...
A new efficient algorithm for computing the imprecise reliability of monotone systems
International Nuclear Information System (INIS)
Utkin, Lev V.
2004-01-01
Reliability analysis of complex systems under partial information about the reliability of components, and under different conditions of independence of components, may be carried out by means of the imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into complex optimization problems which have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm allows us to practically implement reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, and under conditions of component independence or a lack of information about independence. A numerical example illustrates the algorithm.
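For monotone systems with independent components, interval (imprecise) component reliabilities propagate through endpoint arithmetic, because the structure function is monotone in each argument. A textbook-level sketch for series and parallel blocks (this illustrates the monotonicity idea only, not the paper's natural-extension decomposition algorithm):

```python
def series_bounds(component_bounds):
    """Interval reliability of a series system of independent components.
    The system reliability prod(p_i) is increasing in every p_i, so the
    lower/upper system bounds come from the lower/upper endpoints."""
    lo = hi = 1.0
    for l, u in component_bounds:
        lo *= l
        hi *= u
    return lo, hi

def parallel_bounds(component_bounds):
    """Interval reliability of a parallel system, 1 - prod(1 - p_i),
    again monotone increasing in every p_i."""
    lo_fail = hi_fail = 1.0
    for l, u in component_bounds:
        lo_fail *= (1.0 - u)
        hi_fail *= (1.0 - l)
    return 1.0 - hi_fail, 1.0 - lo_fail
```

When independence cannot be assumed, such endpoint products are no longer valid and one must fall back on the optimization problems (natural extension) that the paper's algorithm decomposes.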
A structural approach to constructing perspective efficient and reliable human-computer interfaces
International Nuclear Information System (INIS)
Balint, L.
1989-01-01
The principles of human-computer interface (HCI) realizations are investigated with the aim of getting closer to a general framework and thus to a more or less solid background for constructing perspective efficient, reliable and cost-effective human-computer interfaces. On the basis of characterizing and classifying the different HCI solutions, the fundamental problems of interface construction are pointed out, especially with respect to human error occurrence possibilities. The evolution of HCI realizations is illustrated by summarizing the main properties of past, present and foreseeable future interface generations. HCI modeling is pointed out to be a crucial problem in theoretical and practical investigations. Suggestions are presented concerning HCI structure (hierarchy and modularity), HCI functional dynamics (mapping from input to output information), minimization of human-error-caused system failures (error tolerance, error recovery and error correction), as well as cost-effective HCI design and realization methodology (universal and application-oriented vs. application-specific solutions). The concept of RISC-based and SCAMP-type HCI components is introduced with the aim of having a reduced interaction scheme in communication and a well-defined architecture in the HCI components' internal structure. HCI efficiency and reliability are dealt with by taking into account complexity and flexibility. The application of fast computerized prototyping is also briefly investigated as an experimental device for achieving simple, parametrized, invariant HCI models. Finally, a concise outline of an approach to constructing ideal HCIs is suggested, emphasizing the open questions and the need for future work related to the proposals. (author). 14 refs, 6 figs
Directory of Open Access Journals (Sweden)
Gabriel Oltean
Full Text Available The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, but also accurate, metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel; the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, and sensitivity analyses can be performed (the influence of each
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, but also accurate, metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel; the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G
2016-06-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
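The core operation, fitting a piecewise-linear baseline through non-uniformly spaced isoelectric points and subtracting it, can be sketched in a few lines (this simplified version interpolates directly through the knots and omits the Letter's segmented-equation optimisation for embedded hardware):

```python
import numpy as np

def baseline_estimate(t_knots, v_knots, t):
    """Piecewise-linear baseline through isoelectric (knot) points sampled
    at non-uniform times; each segment uses its own linear equation."""
    return np.interp(t, t_knots, v_knots)

def remove_drift(t, signal, t_knots, v_knots):
    """Subtract the interpolated baseline wander from the raw signal."""
    return signal - baseline_estimate(t_knots, v_knots, t)
```

Because only linear segments are evaluated, the per-sample cost is a couple of additions and one multiplication, which is what makes the approach attractive relative to cubic splines on embedded targets.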
International Nuclear Information System (INIS)
Goto, Masahiro; Uezu, Kazuya; Aoshima, Atsushi; Koma, Yoshikazu
2002-05-01
In this study, efficient separation materials have been created by a computational approach. Based on computational calculations, novel organophosphorus extractants, which have two functional moieties in the molecular structure, were developed for a recycling system for transuranium elements using liquid-liquid extraction. Furthermore, molecularly imprinted resins were prepared by the surface-imprint polymerization technique. Through this research project, we obtained two principal results: 1) the design of novel extractants by a computational approach, and 2) the preparation of highly selective resins by the molecular imprinting technique. The synthesized extractants showed extremely high extractability for rare earth metals compared to commercially available extractants. The extraction equilibrium results suggest that the structural effect of the extractants is one of the key factors enhancing selectivity and extractability in rare earth extraction. Furthermore, a computational analysis was carried out to evaluate the extraction properties of the synthesized extractants for rare earth metals. The computer simulation was shown to be very useful for designing new extractants. The new concept of connecting functional moieties with a spacer is a promising method for developing novel extractants for the treatment of nuclear fuel. In the second part, we proposed a novel molecular imprinting technique (surface template polymerization) for the separation of lanthanides and actinides. A surface-templated resin is prepared by emulsion polymerization using an ion-binding (host) monomer, a resin matrix-forming monomer and the target Nd(III) metal ion. The host monomer, which is amphiphilic, forms a complex with a metal ion at the interface, and the complex remains in place. After the matrix is polymerized, the coordination structure is 'imprinted' at the resin interface. Adsorption of Nd(III) and La(III) ions onto the
International Nuclear Information System (INIS)
Sahni, D.C.; Sharma, A.
2000-01-01
The integral form of the one-speed, spherically symmetric neutron transport equation with isotropic scattering is considered. Two standard problems are solved using the normal mode expansion technique. The expansion coefficients are obtained by solving their singular integral equations. It is shown that these expansion coefficients provide a representation of all spherical harmonics moments of the angular flux as a superposition of Bessel functions. It is seen that large errors occur in the computation of higher moments unless certain precautions are taken. The reasons for this phenomenon are explained; they throw some light on the failure of the spherical harmonics method in treating spherical geometry problems, as observed by Aronsson.
Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster
Directory of Open Access Journals (Sweden)
Alberto Cocaña-Fernández
2016-03-01
As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. High power consumption is also becoming a significant bottleneck for the expansion of these infrastructures in economic terms, due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques, learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between reduced cluster energy consumption, service quality, and number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real-world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.
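The trade-off that such rule learning optimizes can be illustrated with a deliberately tiny stand-in: one candidate rule ("switch spare nodes off after a number of idle timesteps") applied to a synthetic demand trace, scoring energy against reconfiguration count. The workload, node count, and per-node power are invented; EECluster itself learns such thresholds with multi-criteria evolutionary algorithms rather than by hand.

```python
# synthetic nodes-needed trace: bursts of work separated by idle periods
demand = [2]*50 + [0]*100 + [4]*50 + [0]*100
NODES, POWER_W = 4, 200.0          # cluster size, power draw per active node

def simulate(idle_off):
    """Apply one candidate rule: power spare nodes off after
    `idle_off` consecutive under-utilised timesteps."""
    on, idle, energy, reconfigs = NODES, 0, 0.0, 0
    for d in demand:
        need = min(d, NODES)
        if need > on:                              # wake nodes to serve demand
            on, idle, reconfigs = need, 0, reconfigs + 1
        elif need < on:
            idle += 1
            if idle >= idle_off and on > max(need, 1):
                on, idle, reconfigs = max(need, 1), 0, reconfigs + 1
        energy += on * POWER_W                     # energy this timestep
    return energy, reconfigs

always_on = NODES * POWER_W * len(demand)          # no power management at all
energy, reconfigs = simulate(20)
```

A more aggressive threshold saves more energy but, in general, at the cost of service quality and extra reconfigurations — exactly the multi-objective balance the abstract describes.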
International Nuclear Information System (INIS)
Han Liangxiu
2009-01-01
Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. However, owing to the heterogeneous and dynamic nature of resources, dynamic failures occur more often in a distributed grid environment than in traditional computation platforms, causing VO formations to fail. In this paper, we develop a novel self-adaptive mechanism for coping with dynamic failures during VO formation. This self-adaptive scheme allows an individual VO member to automatically find an available replacement once a failure happens, and therefore enables the system to recover from dynamic failures automatically. We define the dynamic failure situations of a system using two standard indicators: mean time between failures (MTBF) and mean time to recover (MTTR), both modelled as Poisson distributions. We investigate and analyze the efficiency of the proposed self-adaptation mechanism by comparing the success probability of VO formations before and after adopting it in three different cases: (1) different failure situations; (2) different organizational structures and scales; (3) different task complexities. The experimental results show that the proposed scheme can automatically adapt to dynamic failures and effectively improve VO formation performance in the event of node failures, providing a valuable addition to the field.
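The intuition behind the comparison — replacement of failed members raises the VO formation success probability — can be checked with a minimal Monte Carlo sketch. This deliberately ignores the paper's MTBF/MTTR time dynamics and organizational structures, reducing each formation attempt to independent per-member failures; the member count, spare-pool size, and failure probability are invented.

```python
import random
random.seed(42)

def vo_formation_success(members, spares, p_fail, adapt, trials=20000):
    """Monte Carlo estimate of the probability that a VO forms
    successfully when each member fails independently with p_fail."""
    ok = 0
    for _ in range(trials):
        failed = sum(random.random() < p_fail for _ in range(members))
        # self-adaptation: failed members are replaced from a spare pool
        if failed == 0 or (adapt and failed <= spares):
            ok += 1
    return ok / trials

p_static = vo_formation_success(8, 3, 0.05, adapt=False)
p_adaptive = vo_formation_success(8, 3, 0.05, adapt=True)
```

Without adaptation every single failure aborts the formation, so even a 5% per-member failure rate is crippling for an eight-member VO; with replacement the formation almost always succeeds.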
Toward efficient computation of the expected relative entropy for nonlinear experimental design
International Nuclear Information System (INIS)
Coles, Darrell; Prange, Michael
2012-01-01
The expected relative entropy between prior and posterior model-parameter distributions is a Bayesian objective function in experimental design theory that quantifies the expected gain in information of an experiment relative to a previous state of knowledge. The expected relative entropy is a preferred measure of experimental quality because it can handle nonlinear data-model relationships, an important fact due to the ubiquity of nonlinearity in science and engineering and its effects on post-inversion parameter uncertainty. This objective function does not necessarily yield experiments that mediate well-determined systems, but, being a Bayesian quality measure, it rigorously accounts for prior information which constrains model parameters that may be only weakly constrained by the optimized dataset. Historically, use of the expected relative entropy has been limited by the computing and storage requirements associated with high-dimensional numerical integration. Herein, a bifocal algorithm is developed that makes these computations more efficient. The algorithm is demonstrated on a medium-sized problem of sampling relaxation phenomena and on a large problem of source–receiver selection for a 2D vertical seismic profile. The method is memory intensive but workarounds are discussed. (paper)
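The objective being computed can be made concrete with a one-dimensional nested Monte Carlo sketch: the expected relative entropy (expected information gain) E[log p(y|θ) − log p(y)] for an invented nonlinear forward model g(θ) = sin(θ) with Gaussian prior and noise. The paper's bifocal algorithm targets the high-dimensional version of exactly this integral; the naive nested estimator below is only meant to show why the computation is expensive (an inner marginal-likelihood loop per outer sample).

```python
import math, random
random.seed(0)

LOG_NORM = -0.5*math.log(2*math.pi)

def loglik(y, theta, sigma):
    # nonlinear data-model relationship: y = sin(theta) + N(0, sigma^2)
    return LOG_NORM - math.log(sigma) - 0.5*((y - math.sin(theta))/sigma)**2

def expected_info_gain(sigma, n_outer=1500, n_inner=300):
    """Nested Monte Carlo estimate of E[ log p(y|theta) - log p(y) ]."""
    total = 0.0
    for _ in range(n_outer):
        theta = random.gauss(0.0, 1.0)                  # prior draw
        y = math.sin(theta) + random.gauss(0.0, sigma)  # simulated datum
        marg = sum(math.exp(loglik(y, random.gauss(0.0, 1.0), sigma))
                   for _ in range(n_inner)) / n_inner   # inner p(y) estimate
        total += loglik(y, theta, sigma) - math.log(marg)
    return total / n_outer

eig_precise = expected_info_gain(0.1)   # low-noise "experiment"
eig_noisy = expected_info_gain(1.0)     # high-noise "experiment"
```

The precise experiment is expected to yield the larger information gain, which is how the objective ranks candidate designs.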
Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling
Directory of Open Access Journals (Sweden)
Yang-Yang Dong
2018-01-01
Although an L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) estimation performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for L-shaped arrays with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for L-shaped arrays and compensate for the mutual coupling blindly by sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong source-signal correlation. Finally, the azimuth and elevation angles are estimated simultaneously, without pair matching, via the complex eigenvalue technique. Compared with existing methods, the proposed method is computationally efficient, requiring neither spectrum search nor polynomial rooting, and also achieves fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.
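The flavour of search-free, eigenvalue-based DOA estimation can be demonstrated on a much simpler setup. The sketch below is not the paper's propagator method for L-shaped arrays with mutual-coupling compensation; it is a minimal shift-invariance (ESPRIT-style) estimator on a plain uniform linear array, with invented angles, spacing, and noise level, shown only to illustrate how angles fall out of complex eigenvalues with no spectrum search.

```python
import numpy as np
rng = np.random.default_rng(0)

M, K, T = 8, 2, 400                  # sensors, sources, snapshots
true_deg = np.array([-10.0, 20.0])   # hypothetical source directions
d = 0.5                              # element spacing in wavelengths

# array manifold and noisy snapshots
A = np.exp(-2j*np.pi*d*np.outer(np.arange(M), np.sin(np.radians(true_deg))))
S = (rng.standard_normal((K, T)) + 1j*rng.standard_normal((K, T)))/np.sqrt(2)
X = A @ S + 0.05*(rng.standard_normal((M, T)) + 1j*rng.standard_normal((M, T)))

R = X @ X.conj().T / T               # sample covariance
_, V = np.linalg.eigh(R)
Es = V[:, -K:]                       # signal subspace (largest eigenvalues)

# shift invariance: Es[1:] ~= Es[:-1] @ Phi, angles from eigenvalues of Phi
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
est = np.sort(np.degrees(np.arcsin(-np.angle(np.linalg.eigvals(Phi))
                                   / (2*np.pi*d))))
```

Each eigenvalue of `Phi` is (up to noise) exp(−j2πd·sinθ), so the DOAs are read directly off its complex eigenvalues.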
Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework
Gannon, C.
2017-12-01
As climate models become progressively more advanced, and spatial resolution further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
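The core pattern — chunk the time axis, map an indicator over each chunk, reduce the partial results — is what XArray and Dask automate (and what a StarCluster-managed cluster distributes). A minimal numpy stand-in with a synthetic temperature field and an invented "days above 35 °C" indicator:

```python
import numpy as np
rng = np.random.default_rng(7)

# synthetic daily-maximum temperature field, degC: (time, lat, lon)
tmax = 20.0 + 10.0*rng.standard_normal((365, 40, 40))

def hot_days(chunk, threshold=35.0):
    """Per-gridcell count of days above threshold (a simple climate indicator)."""
    return (chunk > threshold).sum(axis=0)

# stream the time axis in chunks and reduce -- on real netCDF archives,
# xarray.open_mfdataset(..., chunks=...) + dask performs this lazily
indicator = sum(hot_days(tmax[i:i+90]) for i in range(0, 365, 90))
```

Because the reduction is associative, the chunked result matches a whole-array computation exactly, which is what makes the distributed evaluation safe.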
Efficient computation of the inverse of gametic relationship matrix for a marked QTL
Directory of Open Access Journals (Sweden)
Iwaisaki Hiroaki
2006-04-01
Best linear unbiased prediction of genetic merits for a marked quantitative trait locus (QTL) using mixed model methodology includes the inverse of the conditional gametic relationship matrix (G⁻¹) for the marked QTL. When accounting for inbreeding, the conditional gametic relationships between the two parents of individuals for a marked QTL are necessary to build G⁻¹ directly. Up to now, the tabular method and its adaptations have been used to compute these relationships. In the present paper, an indirect method is implemented at the gametic level to compute these few relationships. Simulation results show that the indirect method performs faster, with significantly lower storage requirements, than the adaptation of the tabular method. The efficiency of the indirect method is mainly due to exploiting the sparseness of G⁻¹. The indirect method can also be applied to construct an approximate G⁻¹ for populations with incomplete marker data, providing approximate probabilities of descent for QTL alleles of individuals with incomplete marker data.
Energy-Efficient FPGA-Based Parallel Quasi-Stochastic Computing
Directory of Open Access Journals (Sweden)
Ramu Seva
2017-11-01
The high performance of FPGAs (Field Programmable Gate Arrays) in image processing applications is justified by their flexible reconfigurability, their inherently parallel nature and the availability of large amounts of internal memory. Lately, the Stochastic Computing (SC) paradigm has been found to be significantly advantageous in certain application domains, including image processing, because of its lower hardware complexity and power consumption. However, its viability is deemed to be limited due to its serial bitstream processing and the excessive run time required for convergence. To address these issues, a novel approach is proposed in this work in which an energy-efficient implementation of SC is accomplished by introducing fast-converging Quasi-Stochastic Number Generators (QSNGs) and parallel stochastic bitstream processing, which are well suited to leverage the FPGA's reconfigurability and abundant internal memory resources. The proposed approach has been tested on a Virtex-4 FPGA, and the results have been compared with serial and parallel implementations of conventional stochastic computation using the well-known SC edge detection and multiplication circuits. The results show that with this approach, execution time and power consumption are decreased by factors of 3.5 and 4.5 for the edge detection and multiplication circuits, respectively.
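The SC multiplication circuit mentioned above is conceptually tiny: in unipolar SC a value in [0, 1] is encoded as the 1-probability of a bitstream, and ANDing two independent streams yields a stream encoding the product. The software sketch below also contrasts a pseudo-random stream with a quasi-random one built from van der Corput sequences, to illustrate the faster convergence that motivates QSNGs; the stream length and the choice of bases 2 and 3 are illustrative, not the paper's hardware design.

```python
import random
random.seed(3)

def radical_inverse(i, base):
    """Van der Corput radical inverse: a low-discrepancy sequence in [0, 1)."""
    v, denom = 0.0, 1.0
    while i:
        denom *= base
        v += (i % base) / denom
        i //= base
    return v

def sc_multiply(a, b, n=4096, quasi=False):
    """Unipolar stochastic multiplication: AND two bitstreams whose
    1-probabilities encode a and b, then count the ones."""
    if quasi:   # quasi-stochastic streams from two coprime-base sequences
        sa = (radical_inverse(i, 2) < a for i in range(n))
        sb = [radical_inverse(i, 3) < b for i in range(n)]
    else:       # conventional pseudo-random stochastic streams
        sa = (random.random() < a for _ in range(n))
        sb = [random.random() < b for _ in range(n)]
    return sum(x and y for x, y in zip(sa, sb)) / n

est_pseudo = sc_multiply(0.5, 0.8)
est_quasi = sc_multiply(0.5, 0.8, quasi=True)
```

Both estimates approach 0.5 × 0.8 = 0.4, but the pseudo-random error shrinks like O(1/√n) while the low-discrepancy streams converge markedly faster — the serial-bitstream convergence problem the abstract describes.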
Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency
Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark
2010-04-01
This paper discusses a student project at Kettering University focusing on the design and construction of an energy-efficient greenhouse climate control system. To maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer-controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water-filled base unit; the heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabVIEW computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable unit with a 5' x 5' footprint. Control inputs were temperature, water-level, and humidity sensors; output control devices were fan-actuating relays and water-fill solenoid valves. A graphical user interface was developed to monitor the system, set control parameters, and provide programmable data-recording times and intervals.
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.
2018-02-01
Estimating the efficiency and sustainability of geological subsurface utilization, i.e., Carbon Capture and Storage (CCS), requires an integrated risk assessment approach that considers the coupled processes involved, among others the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to elaborate reliable environmental impact assessments. Consequently, the required sensitivity analyses consume significant computational time due to the high number of realizations that have to be carried out. Because of the high computational cost of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, these are not applicable for the purpose of uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation is introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is implemented directly into a multiphase fluid flow simulator to carry out the semi-analytical coupling for the simulation of hydromechanical processes. In this way, the iterative parameter exchange between the multiphase flow and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable of reducing the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
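The parametrization step can be sketched in a few lines: tabulate the void ratio of a representative fault element against the fault-base reference pressure from the single coupled base run, fit a function once, and evaluate that function inside the flow simulator instead of calling a mechanical solver. All numerical values below are invented stand-ins for the base-simulation output.

```python
import numpy as np

# void ratio of a representative fault element, tabulated from ONE
# preliminary two-way coupled base simulation (values invented here)
p_base = np.linspace(10.0, 30.0, 21)                  # fault-base pressure, MPa
e_base = 0.30 + 0.004*(p_base - 10.0) + 1e-4*(p_base - 10.0)**2

coef = np.polyfit(p_base, e_base, 2)                  # parametrized once

def porosity(p_ref):
    """Semi-analytical coupling step inside the flow simulator:
    porosity follows from the fitted void ratio at one reference
    pressure, with no iterative exchange with a mechanical solver."""
    e = np.polyval(coef, p_ref)                       # void ratio e(p_ref)
    return e / (1.0 + e)                              # porosity from void ratio
```

Permeability would then be updated from this porosity (e.g., via a porosity-permeability relation), completing the porosity/permeability update the abstract attributes to the single reference pore pressure.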
Highly efficient computer algorithm for identifying layer thickness of atomically thin 2D materials
Lee, Jekwan; Cho, Seungwan; Park, Soohyun; Bae, Hyemin; Noh, Minji; Kim, Beom; In, Chihun; Yang, Seunghoon; Lee, Sooun; Seo, Seung Young; Kim, Jehyun; Lee, Chul-Ho; Shim, Woo-Young; Jo, Moon-Ho; Kim, Dohun; Choi, Hyunyong
2018-03-01
Research on layered materials such as transition-metal dichalcogenides (TMDs) has demonstrated that their optical, electrical and mechanical properties depend strongly on the layer number N. Thus, efficient and accurate determination of N is the most crucial step before the associated device fabrication. Optical microscopy is the most widely used experimental technique for identifying N. However, a critical drawback of this approach is that it relies on extensive laboratory experience to estimate N; it requires a very time-consuming image-searching task by eye, plus secondary measurements such as atomic force microscopy and Raman spectroscopy to confirm N. In this work, we introduce a computer algorithm based on image analysis of quantized optical contrast. We show that our algorithm applies to a wide variety of layered materials, including graphene, MoS2, and WS2, regardless of substrate. The algorithm consists of two parts. First, it sets up an appropriate boundary between the target flakes and the substrate. Second, to compute N, it automatically calculates the optical contrast using an adaptive RGB estimation process for each target and returns a map of integer N values over the target flake positions. Using conventional desktop computational power, the time taken to display the final N map was 1.8 s on average for an image of 1280 by 960 pixels, with an accuracy of 90% (six estimation errors among 62 samples) when compared to the other methods. To show the effectiveness of our algorithm, we also apply it to TMD flakes transferred onto optically transparent c-axis sapphire substrates and obtain a similar accuracy of 94% (two estimation errors among 34 samples).
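The quantized-contrast idea can be sketched directly: measure each pixel's optical contrast against the mean bare-substrate intensity and round it to an integer multiple of a per-layer contrast step. The per-layer step, the synthetic image, and the simple channel averaging below are invented for illustration; the paper's adaptive RGB estimation and boundary detection are more involved.

```python
import numpy as np

def layer_map(img, substrate_mask, contrast_per_layer=0.06):
    """Quantized optical contrast -> integer layer number N per pixel."""
    sub = img[substrate_mask].mean(axis=0)        # mean substrate RGB
    # contrast of each pixel relative to the bare substrate
    contrast = (sub - img).mean(axis=2) / sub.mean()
    return np.rint(contrast / contrast_per_layer).astype(int)

# synthetic test image: bright substrate with a darker "bilayer" flake
img = np.full((10, 10, 3), 200.0)
img[2:5, 2:5] = 176.0                             # contrast 0.12 = 2 layers
mask = np.zeros((10, 10), bool)
mask[7:, 7:] = True                               # known bare-substrate region
N = layer_map(img, mask)
```

Thicker flakes produce proportionally larger contrast, so the rounded ratio recovers N without any manual inspection.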
Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity
Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott
2008-01-01
The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ∼3%, and the predicted peak shear stress is consistent to within ∼7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic
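The log-velocity closure at the heart of this approach reduces to the law of the wall: u(z) = (u*/κ) ln(z/z0) along a ray perpendicular to the bed, from which the boundary shear stress is τ = ρu*². A minimal sketch, with invented velocity, height, and roughness values:

```python
import math

RHO, KAPPA = 1000.0, 0.41       # water density (kg/m^3), von Karman constant

def boundary_shear_stress(u, z, z0):
    """Shear stress from a log-velocity profile evaluated along a ray
    perpendicular to the local bed: u(z) = (u*/kappa) * ln(z/z0)."""
    u_star = KAPPA * u / math.log(z / z0)   # shear velocity, m/s
    return RHO * u_star**2                  # tau = rho * u*^2, Pa

# e.g. 1 m/s measured 0.5 m above a bed with 1 mm roughness height
tau = boundary_shear_stress(u=1.0, z=0.5, z0=0.001)
```

In the model, a detachment-limited erosion law (lowering rate proportional to τ) applied around the wetted perimeter is what drives the cross-section's evolution.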
Efficient approach to compute melting properties fully from ab initio with application to Cu
Zhu, Li-Fang; Grabowski, Blazej; Neugebauer, Jörg
2017-12-01
Applying thermodynamic integration within an ab initio-based free-energy approach is a state-of-the-art method to calculate melting points of materials. However, the high computational cost and the reliance on a good reference system for calculating the liquid free energy have so far hindered a general application. To overcome these challenges, we propose in this work the two-optimized-references thermodynamic integration using Langevin dynamics (TOR-TILD) method, extending the two-stage upsampled thermodynamic integration using Langevin dynamics (TU-TILD) method, which was originally developed to obtain anharmonic free energies of solids, to the calculation of liquid free energies. The core idea of TOR-TILD is to fit two empirical potentials to the energies from density functional theory based molecular dynamics runs for the solid and the liquid phase, and to use these potentials as reference systems for thermodynamic integration. Because the empirical potentials closely reproduce the ab initio system in the relevant part of the phase space, the convergence of the thermodynamic integration is very rapid. Therefore, the proposed approach significantly improves the computational efficiency while preserving the required accuracy. As a test case, we apply TOR-TILD to fcc Cu, computing not only the melting point but also various other melting properties, such as the entropy and enthalpy of fusion and the volume change upon melting. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the local-density approximation (LDA) are used. Using both functionals gives a reliable ab initio confidence interval for the melting point, the enthalpy of fusion, and the entropy of fusion.
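The thermodynamic integration step itself — ΔF = ∫₀¹ ⟨U_target − U_ref⟩_λ dλ along the mixed potential U_λ = (1−λ)U_ref + λU_target — can be demonstrated on a toy where the answer is known exactly: switching a 1-D harmonic oscillator from spring constant k_ref to k_tgt, for which ΔF = (k_BT/2) ln(k_tgt/k_ref). Simple Metropolis sampling stands in for the Langevin dynamics, and all constants are invented; the point is only the TI machinery, not TOR-TILD's DFT-fitted reference potentials.

```python
import math, random
random.seed(1)

kT, k_ref, k_tgt = 1.0, 1.0, 4.0    # temperature and spring constants

def mean_dU(lam, steps=40000):
    """Metropolis estimate of <U_tgt - U_ref> under the mixed potential
    U_lam(x) = 0.5*((1-lam)*k_ref + lam*k_tgt)*x^2."""
    k_mix = (1.0 - lam)*k_ref + lam*k_tgt
    x, acc, kept = 0.0, 0.0, 0
    for i in range(steps):
        xn = x + random.uniform(-1.0, 1.0)
        if random.random() < math.exp(-0.5*k_mix*(xn*xn - x*x)/kT):
            x = xn
        if i >= steps // 4:          # discard equilibration samples
            acc += 0.5*(k_tgt - k_ref)*x*x
            kept += 1
    return acc / kept

# integrate over the coupling parameter (midpoint rule, 8 lambda points)
dF = sum(mean_dU((i + 0.5)/8) for i in range(8)) / 8
exact = 0.5*kT*math.log(k_tgt/k_ref)   # analytic result for this toy
```

The closer the reference potential is to the target (here, the closer k_ref is to k_tgt), the flatter the integrand and the fewer λ points and samples are needed — the mechanism behind TOR-TILD's rapid convergence.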
Energy Technology Data Exchange (ETDEWEB)
Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)
2017-05-01
Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Achieving performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independently of the other steps, under the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address
International Nuclear Information System (INIS)
Yeo, Seung Gu; Kim, Eun Seog
2013-01-01
This study aimed to investigate efficient approaches for determining the internal target volume (ITV) from four-dimensional computed tomography (4D CT) images used in stereotactic body radiotherapy (SBRT) for patients with early-stage non-small cell lung cancer (NSCLC). 4D CT images were analyzed for 15 patients who received SBRT for stage I NSCLC. Three different ITVs were determined as follows: combining the clinical target volume (CTV) from all 10 respiratory phases (ITV10Phases); combining the CTV from four respiratory phases, namely the two extreme phases (0% and 50%) plus two intermediate phases (20% and 70%) (ITV4Phases); and combining the CTV from the two extreme phases (ITV2Phases). The matching index (MI) of ITV4Phases and ITV2Phases was defined as the ratio of ITV4Phases and ITV2Phases, respectively, to ITV10Phases. The tumor motion index (TMI) was defined as the ratio of ITV10Phases to CTVmean, the mean of the 10 CTVs delineated on the 10 respiratory phases. The ITVs were significantly different, in the order of ITV10Phases, ITV4Phases, and ITV2Phases (all p < 0.05). The MI of ITV4Phases was significantly higher than that of ITV2Phases (p < 0.05), and the MI of ITV4Phases was inversely related to TMI (r = -0.569, p = 0.034). In a subgroup with low TMI (n = 7), ITV4Phases was not statistically different from ITV10Phases (p = 0.192) and its MI was significantly higher than that of ITV2Phases (p = 0.016). ITV4Phases may thus be an efficient alternative to the optimal ITV10Phases in SBRT for early-stage NSCLC with less tumor motion.
Biswas, Abul Kalam; Barik, Sunirmal; Das, Amitava; Ganguly, Bishwajit
2016-06-01
We report a number of new metal-free organic dyes (2-6) that have cyclic asymmetric benzotripyrrole derivatives with peripheral nitrogen atoms in the ring as donor groups, fluorene and thiophene groups as π-spacers, and a cyanoacrylic acid acceptor group. Density functional theory (DFT) and time-dependent DFT (TD-DFT) calculations were employed to examine the influence of the position of the donor nitrogen atom and of π-conjugation on solar cell performance. The calculated electron-injection driving force (ΔGinject), electron-regeneration driving force (ΔGregen), light-harvesting efficiency (LHE), dipole moment (μnormal), and number of electrons transferred (Δq) indicate that dyes 3, 4, and 6 have significantly higher efficiencies than reference dye 1, which itself exhibits high efficiency. We also extended our comparison to other reported dyes, 7-9, which have the donor nitrogen atom in the middle of the ring system. The computed results suggest that dye 6 possesses a higher incident photon-to-current conversion efficiency (IPCE) than the reported dyes 7-9. Thus, the use of donor groups with peripheral nitrogen atoms appears to lead to more efficient dyes than those in which the nitrogen atom is in the middle of the donor ring system. Graphical Abstract: The locations of the nitrogen atoms in the donor groups of the designed dye molecules have an important influence on DSSC efficiency.
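Two of the descriptors named above have compact working equations commonly used in TD-DFT dye-sensitized solar cell studies: LHE = 1 − 10^(−f), with f the oscillator strength of the absorption, and ΔG_inject as the excited-state oxidation potential of the dye relative to the semiconductor conduction-band edge. The sketch below uses those standard definitions with hypothetical energies (the paper's sign conventions and values may differ; −4.0 eV is a commonly quoted TiO2 conduction-band edge).

```python
def lhe(f_osc):
    """Light-harvesting efficiency from the oscillator strength."""
    return 1.0 - 10.0**(-f_osc)

def dg_inject(e_ox_ground, e_excitation, e_cb=-4.0):
    """Electron-injection driving force (eV): excited-state oxidation
    potential of the dye relative to the conduction-band edge."""
    e_ox_excited = e_ox_ground - e_excitation   # eV
    return e_ox_excited - e_cb

# hypothetical dye: ground-state E_ox = -5.5 eV, vertical excitation 2.2 eV
lhe_value = lhe(1.0)
dg_value = dg_inject(-5.5, 2.2)
```

A negative ΔG_inject indicates thermodynamically favourable electron injection, which is why dyes are ranked on its magnitude alongside LHE.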
Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias
2015-01-01
Genome-wide association studies (GWAS) are a common approach for the systematic discovery of single nucleotide polymorphisms (SNPs) associated with a given disease. The univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis of complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1 million SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how improvements in the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia", at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analyses of higher-order interactions in large modern GWAS.
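A back-of-envelope check makes the scale of the combinatorics concrete: an exhaustive three-way analysis over 1.1 million SNPs must evaluate C(1,100,000, 3) ≈ 2.2 × 10¹⁷ triples, so the quoted 5.8-year runtime implies a sustained throughput on the order of a billion interaction tests per second.

```python
from math import comb

n_snps = 1_100_000
triples = comb(n_snps, 3)              # exhaustive 3-way interaction tests
seconds = 5.8 * 365.25 * 24 * 3600     # the quoted Blue Gene/Q runtime
rate = triples / seconds               # implied tests per second
```

This is why pairwise analyses remain routine while exhaustive higher-order scans are only now entering supercomputer range.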
International Nuclear Information System (INIS)
Normann, Goeran
2002-01-01
The accrual of energy taxation has led to a complex structure of taxes and charges characterized by instability and low efficiency. Other reasons for analyzing the system are the pressure from our contractual responsibilities within the European Union and the raised ambitions of environmental policy. The report concludes that it would be justified to separate fiscal energy taxation from measures to internalize environmental costs that the market does not register. This separation would make it possible to create a more transparent and rational energy taxation. Fiscal energy taxation ought to be a broad, value-based tax, equal for all energy sources. Value-based means, besides the energy content in kWh, also properties such as conversion and distribution costs. Two alternatives are suggested for fiscal energy taxation. The first is a separate consumption tax on energy; such a tax would have to amount to 48% to produce the same income as the fiscal elements of today's energy taxes. The second is to include the fiscal energy tax in the value added tax; this would raise the standard VAT level to 30%, if the lower VAT levels are kept unchanged. With this model, consumption of energy would be treated like any other consumption. Environmental policy measures against greenhouse gases should be handled through a system of international trading of emission quotas for such gases. Measures against other external effects of energy use are not suggested in this report, except for the opinion that economic incentives are preferable to regulations. The initial allocation of quotas ought to be done through an auction, since this method would give lower national costs than the alternatives. The system should cover all greenhouse gases and (almost) all sources, which indicates that an upstream solution with low administrative costs would be best. A safety vent should be considered, so that extreme costs for CO2 emissions are avoided, if e.g. the
Development of a low-cost biogas filtration system to achieve higher-power efficient AC generator
Mojica, Edison E.; Ardaniel, Ar-Ar S.; Leguid, Jeanlou G.; Loyola, Andrea T.
2018-02-01
The paper focuses on the development of a low-cost biogas filtration system for an alternating-current generator to achieve higher efficiency in terms of power production. Raw biogas comprises 57% combustible elements and 43% non-combustible elements, the latter containing carbon dioxide (36%), water vapor (5%), hydrogen sulfide (0.5%), nitrogen (1%), oxygen (0 - 2%), and ammonia (0 - 1%). The filtration system consists of six stages: stage 1 is a water scrubber filter intended to remove the carbon dioxide and traces of hydrogen sulfide; stage 2 is a silica gel filter intended to reduce the water vapor; stage 3 is an iron sponge filter intended to remove the remaining hydrogen sulfide; stage 4 is a sodium hydroxide solution filter intended to remove the elemental sulfur formed during the interaction of the hydrogen sulfide with the iron sponge and to further remove carbon dioxide; stage 5 is a silica gel filter intended to further eliminate the water vapor gained in stage 4; and stage 6 is an activated carbon filter intended to remove the carbon dioxide. The filtration system lowered the non-combustible elements by 72%, thereby increasing the combustible element by 54.38%. The unfiltered biogas is capable of generating 16.3 kW, while the filtered biogas is capable of generating 18.6 kW; the increase in methane concentration thus resulted in a 14.11% increase in power output and in better engine performance in the generation of electricity.
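The reported power gain can be checked directly from the generator figures quoted in the abstract:

```python
unfiltered_kw = 16.3  # generator output on raw biogas
filtered_kw = 18.6    # generator output on filtered biogas

gain = (filtered_kw - unfiltered_kw) / unfiltered_kw * 100
print(f"power increase: {gain:.2f}%")  # → 14.11%
```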
Directory of Open Access Journals (Sweden)
JONG WOON KIM
2014-04-01
In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source computation, and use it together with parallel computing to effectively reduce the computation time. Its computational efficiency was tested on three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation with the discontinuous finite element method on unstructured tetrahedral meshes for complicated geometries. The numerical tests show speedups of 17-42 times in the elapsed time per iteration using the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.
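The idea of avoiding unnecessarily repeated scattering-kernel work can be illustrated, in a much simplified form, by collapsing fixed kernel coefficients once and reusing them across source iterations; all coefficients and dimensions below are invented for illustration and are not the paper's actual kernel:

```python
import random

random.seed(0)
N_GROUPS, N_MOMENTS = 8, 16

# illustrative per-moment group-to-group scattering coefficients
coeff = [[[random.random() for _ in range(N_MOMENTS)]
          for _ in range(N_GROUPS)] for _ in range(N_GROUPS)]

def kernel_naive(g, gp, mu_moments):
    # rebuilds the moment sum on every call (repeated work each iteration)
    return sum(c * m for c, m in zip(coeff[g][gp], mu_moments))

# modified kernel: collapse the moment sum once, then reuse every iteration
mu_moments = [1.0 / (k + 1) for k in range(N_MOMENTS)]
collapsed = [[sum(c * m for c, m in zip(coeff[g][gp], mu_moments))
              for gp in range(N_GROUPS)] for g in range(N_GROUPS)]

def kernel_modified(g, gp):
    return collapsed[g][gp]

# both forms give the same kernel value; the second avoids the inner loop
assert abs(kernel_naive(2, 3, mu_moments) - kernel_modified(2, 3)) < 1e-12
```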
International Nuclear Information System (INIS)
Azmy, Y.Y.; Kirk, B.L.
1990-01-01
Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
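The abstract models efficiency as a function of problem size and processor count. A generic serial-fraction (Amdahl-style) model, which is only a stand-in for the paper's fitted model, shows the characteristic shape of such a curve:

```python
def parallel_efficiency(p, serial_fraction):
    """Amdahl-style efficiency E = speedup / p for p processors.

    serial_fraction is the non-parallelizable share of the work; this is a
    generic illustrative model, not the validated model from the paper."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
    return speedup / p

# efficiency drops as processors are added for a fixed problem size
for p in (1, 4, 16):
    print(p, round(parallel_efficiency(p, 0.01), 3))
```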
Increasing efficiency of job execution with resource co-allocation in distributed computer systems
Cankar, Matija
2014-01-01
The field of distributed computer systems, while not new in computer science, is still the subject of considerable interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data). In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...
Energy Technology Data Exchange (ETDEWEB)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.
2017-04-11
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
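The volume-fraction-dependent closures described above can be illustrated with commonly used functional forms. The Richardson-Zaki hindered-settling law and Krieger-Dougherty viscosity model below are typical choices with invented parameter values, not necessarily the functions fitted to the cellulose data in the paper:

```python
def hindered_settling_velocity(phi, v0=1e-4, n=4.65):
    """Richardson-Zaki hindered settling: v = v0 * (1 - phi)**n.
    v0 (m/s) and exponent n are illustrative values, not the paper's fit."""
    return v0 * (1.0 - phi) ** n

def suspension_viscosity(phi, mu0=1e-3, phi_max=0.6):
    """Krieger-Dougherty viscosity, diverging near maximum packing phi_max."""
    return mu0 * (1.0 - phi / phi_max) ** (-2.5 * phi_max)

# settling slows and viscosity rises as solids volume fraction increases
for phi in (0.0, 0.1, 0.3):
    print(phi, hindered_settling_velocity(phi), suspension_viscosity(phi))
```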
A hybrid model for the computationally-efficient simulation of the cerebellar granular layer
Directory of Open Access Journals (Sweden)
Anna Cattani
2016-04-01
The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species with a large density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system), obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations described by a set of continuous variables. Communications between populations, which translate into interactions between the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction, increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.
Directory of Open Access Journals (Sweden)
Danielle S Bassett
2010-04-01
Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow the historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired, they did not strictly minimize wiring costs. Artificial and biological information processing systems may both evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
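Rent's rule relates the number of external connections T of a circuit block to the number of elements g it contains as T = t g^p, so the Rent exponent p can be estimated with a log-log least-squares fit over partition data. The (block size, terminal count) pairs below are hypothetical, purely to show the fitting procedure:

```python
import math

# hypothetical (block_size, terminal_count) pairs from recursive partitioning
samples = [(2, 6), (4, 10), (8, 16), (16, 26), (32, 42)]

# least-squares fit of log T = log t + p * log g
xs = [math.log(g) for g, t in samples]
ys = [math.log(t) for g, t in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
p = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated Rent exponent p = {p:.2f}")
```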
Quariguasi Frota Neto, João; Bloemhof-Ruwaard, Jacqueline
2009-01-01
Remanufacturing has long been perceived as an environmentally friendly initiative, and it is therefore supported by a number of governments, in particular in Europe. Yet the assumption that remanufacturing is desirable to society has never been systematically investigated. In this paper, we focus our attention on the electronics industry. In particular, we take a close look at remanufacturing within the personal computer and mobile phone industries. We investigate whether reman...
International Nuclear Information System (INIS)
Klein, K.M.; Park, C.; Yang, S.; Morris, S.; Do, V.; Tasch, F.
1992-01-01
We have developed a new computationally efficient two-dimensional model for boron implantation into single-crystal silicon. The new model is based on the dual Pearson semi-empirical implant depth profile model and the UT-MARLOWE Monte Carlo boron ion implantation model. It can predict, with very high computational efficiency, two-dimensional as-implanted boron profiles as a function of energy, dose, tilt angle, rotation angle, masking edge orientation, and masking edge thickness
Wan, Shixiang; Zou, Quan
2017-01-01
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing data has resulted in a shortage of efficient alignment approaches for ultra-large biological sequences of different types. Distributed and parallel computing is a crucial technique for accelerating ultra-large (e.g., files larger than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II also provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.
Energy-Efficient Caching for Mobile Edge Computing in 5G Networks
Directory of Open Access Journals (Sweden)
Zhaohui Luo
2017-05-01
Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm for providing caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications within the limited backhaul capacity of mobile networks. Most existing studies focus on cache allocation, mechanism design, and coding design for caching. However, an uninterrupted, fixed-power grid supply for a MEC server (MECS) is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the energy consumption of the MECS problem in cellular networks. Given average download latency constraints, we take the MECS's energy consumption, backhaul capacities, and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine a near-optimal caching placement and obtain better energy-efficiency gains than conventional caching placement strategies. In particular, the proposed scheme significantly reduces the joint cost when backhaul capacity is low.
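A genetic algorithm for binary cache-placement decisions can be sketched as follows. The energy model (cheap local hit vs. expensive backhaul fetch), population sizes, and popularity values are all invented for illustration; the paper's actual objective also includes latency constraints and backhaul capacities:

```python
import random

random.seed(1)
N_CONTENT, CACHE_SLOTS = 20, 5
# content popularity, most popular first (hypothetical values)
popularity = sorted((random.random() for _ in range(N_CONTENT)), reverse=True)

def energy_cost(placement):
    # cached items are served locally (cheap); the rest cross the backhaul
    local, backhaul = 1.0, 5.0
    return sum(p * (local if cached else backhaul)
               for p, cached in zip(popularity, placement))

def random_individual():
    slots = set(random.sample(range(N_CONTENT), CACHE_SLOTS))
    return [i in slots for i in range(N_CONTENT)]

def offspring(a, b):
    child = [random.choice(genes) for genes in zip(a, b)]
    i, j = random.randrange(N_CONTENT), random.randrange(N_CONTENT)
    child[i], child[j] = child[j], child[i]      # swap mutation
    cached = [i for i, c in enumerate(child) if c]
    random.shuffle(cached)
    for i in cached[CACHE_SLOTS:]:               # repair: respect cache capacity
        child[i] = False
    return child

population = [random_individual() for _ in range(30)]
for _ in range(60):
    population.sort(key=energy_cost)
    elite = population[:10]
    population = elite + [offspring(random.choice(elite), random.choice(elite))
                          for _ in range(20)]
best = min(population, key=energy_cost)
print(round(energy_cost(best), 3))
```

With this toy objective the GA converges toward caching the most popular items, which is the intuition behind popularity-aware placement.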
An efficient approach for improving virtual machine placement in cloud computing environment
Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.
2017-11-01
The ever-increasing demand for cloud services requires more data centres. The power consumption of data centres is a challenging problem for cloud computing that has not been properly addressed by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so power-efficient mechanisms are necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation; it reduces power consumption by about 5% compared to the modified best-fit decreasing algorithm while improving the SLA violation by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
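The best-fit decreasing heuristic mentioned above can be sketched in a few lines: sort VMs by decreasing demand, then place each on the open host with the smallest remaining capacity that still fits, opening a new host only when necessary. The VM names and capacities are hypothetical:

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (by resource demand) onto hosts; illustrative BFD sketch,
    minimizing the number of powered-on hosts."""
    hosts = []       # remaining capacity per open host
    placement = {}   # vm -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        # best fit: host whose remaining capacity is smallest but sufficient
        candidates = [(cap, i) for i, cap in enumerate(hosts) if cap >= demand]
        if candidates:
            _, i = min(candidates)
        else:
            hosts.append(host_capacity)   # open a new host
            i = len(hosts) - 1
        hosts[i] -= demand
        placement[vm] = i
    return placement, hosts

placement, hosts = best_fit_decreasing(
    {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3}, host_capacity=1.0)
print(len(hosts), "hosts used")  # → 2 hosts used
```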
Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems
Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.
2018-05-01
Development of oil and gas resources under difficult geological, geographical, and economic conditions requires considerable financial outlay; careful justification, selection of the most promising directions, and modern technologies are therefore necessary from the standpoint of cost efficiency of the planned activities. Ensuring high precision of regional and local forecasts and of reservoir modeling for hydrocarbon fields requires analyzing huge, constantly changing arrays of spatially distributed information. Solving this task calls for modern remote methods of investigating prospective oil-and-gas territories, the combined use of remote, non-destructive geologic-geophysical data and Earth-sounding satellite data, and the most advanced technologies for processing them. In the article, the authors review the experience of Russian and foreign companies in processing geologic-geophysical information with computer systems. They conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration work require the broad use of geoinformation technologies, one of the most promising directions for achieving high profitability in the oil and gas industry.
Liu, Chen-Guang; Li, Zhi-Yang; Hao, Yue; Xia, Juan; Bai, Feng-Wu; Mehmood, Muhammad Aamer
2018-05-01
Flocculation plays an important role in the immobilized fermentation of biofuels and biochemicals. It is essential to understand the flocculation phenomenon at the physical and molecular scale; however, flocs cannot be studied directly due to their fragile nature. Hence, the present study focuses on the morphological specificities of yeast floc formation and sedimentation via computer simulation with a single-floc growth model based on the Diffusion-Limited Aggregation (DLA) model. The impact of shear force, adsorption, and cell propagation on porosity and floc size is systematically illustrated. Strong shear force and weak adsorption reduced floc size but had little impact on porosity. In addition, cell propagation increased the compactness of flocs, enabling them to reach a larger size. A multiple-floc growth model was then developed to explain sedimentation at various initial floc sizes. Both models exhibited qualitative agreement with available experimental data. By regulating the operating constraints during fermentation, the present study will help find optimal conditions to control the floc size distribution for efficient fermentation and harvesting. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
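The core of a DLA-type growth model is simple: random walkers are released far from a seed and stick when they touch the growing cluster. The minimal 2D lattice sketch below (grid size, walker count, and boundary handling all invented) illustrates the mechanism, without the shear, adsorption, and propagation terms of the paper's model:

```python
import random

random.seed(42)
GRID = 41
CENTER = GRID // 2
occupied = {(CENTER, CENTER)}  # seed particle

def neighbors(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def release_walker():
    """Random walker released at the grid boundary; sticks on touching the cluster."""
    x, y = random.choice([(0, random.randrange(GRID)), (GRID - 1, random.randrange(GRID)),
                          (random.randrange(GRID), 0), (random.randrange(GRID), GRID - 1)])
    while True:
        if any(n in occupied for n in neighbors(x, y)):
            occupied.add((x, y))   # stick next to the cluster
            return
        x, y = random.choice(neighbors(x, y))
        x, y = max(0, min(GRID - 1, x)), max(0, min(GRID - 1, y))  # stay on grid

for _ in range(60):
    release_walker()
print(len(occupied))  # 61 sites: seed + 60 aggregated walkers
```

The resulting cluster is ramified and porous, which is why floc porosity and size can be read off directly from the occupied-site set.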
Directory of Open Access Journals (Sweden)
Wafaa S. Sayed
2017-01-01
Chaotic systems appear in many applications such as pseudo-random number generation, text encryption, and secure image transfer. Numerical solutions of these systems using digital software or hardware inevitably deviate from the expected analytical solutions. Chaotic orbits produced using finite-precision systems do not exhibit the infinite period expected under the assumptions of infinite simulation time and precision. In this paper, digital implementation of the generalized logistic map with signed parameter is considered. We present a fixed-point hardware realization of a pseudo-random number generator based on the logistic map that exhibits a trade-off between computational efficiency and accuracy. Several factors, such as the precision used, the order of execution of the operations, the parameter, and the initial point values, affect the properties of the finite-precision map. For the positive and negative parameter cases, the studied properties include bifurcation points, output range, maximum Lyapunov exponent, and period length, and the performance of the finite-precision logistic map is compared in the two cases. A basic stream cipher system is realized to evaluate the system performance for encryption applications at different bus sizes, with regard to the encryption key size, hardware requirements, maximum clock frequency, NIST and correlation tests, and histogram, entropy, and Mean Absolute Error analyses of encrypted images.
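A fixed-point iteration of the logistic map can be sketched as below. The 16-bit fractional precision, parameter r = 3.9, and initial point are illustrative choices, not the configurations studied in the paper; the point is that all arithmetic is done on integers, as a hardware realization would:

```python
# Fixed-point iteration of the logistic map x -> r*x*(1-x) with 16 fractional bits.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def fx_mul(a, b):
    # fixed-point multiply with truncation, as in simple hardware multipliers
    return (a * b) >> FRAC_BITS

def logistic_step(x, r):
    return fx_mul(r, fx_mul(x, ONE - x))

def to_float(x):
    return x / ONE

r = int(3.9 * ONE)   # chaotic regime (illustrative)
x = int(0.25 * ONE)
orbit = []
for _ in range(10):
    x = logistic_step(x, r)
    orbit.append(to_float(x))
print(orbit[:3])
```

Because the state space is finite (2^16 fractional values here), every such orbit is eventually periodic, which is exactly the finite-period effect the paper analyzes.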
International Nuclear Information System (INIS)
Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern
2010-01-01
This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue from the measured displacement field while considerably reducing the numerical cost compared to previous approaches. This is realized by combining, and further elaborating, variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is locally refined only if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted into a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
BINGO: a code for the efficient computation of the scalar bi-spectrum
Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme
2013-05-01
We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.
Directory of Open Access Journals (Sweden)
Eric Chalmers
2016-12-01
The mammalian brain is thought to use a version of Model-Based Reinforcement Learning (MBRL) to guide goal-directed behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes or to learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, forward sweeps through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks.
Yilmaz, Ferkan
2012-12-01
The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ N denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on correlated generalized fading environments. © 2012 IEEE.
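The letter derives these statistics analytically via the MGF. As a brute-force cross-check for one tractable special case, μ_n = E[ln^n(1 + γ)] can be computed by direct numerical integration when γ is exponentially distributed (single-branch Rayleigh fading); the mean SNR, integration limits, and step count below are illustrative:

```python
import math

def capacity_hos(n, mean_snr, steps=200000, upper=200.0):
    """Estimate mu_n = E[ln^n(1 + gamma)] for gamma ~ Exp(mean `mean_snr`)
    by trapezoidal integration; a numerical cross-check, not the MGF method."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        g = i * h
        pdf = math.exp(-g / mean_snr) / mean_snr
        f = (math.log1p(g) ** n) * pdf
        total += f if 0 < i < steps else f / 2
    return total * h

mu1 = capacity_hos(1, mean_snr=10.0)  # first-order HOS: ergodic capacity (nats)
mu2 = capacity_hos(2, mean_snr=10.0)  # second-order HOS
print(round(mu1, 3), round(mu2 - mu1 ** 2, 3))  # mean and variance of ln(1+gamma)
```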
Yilmaz, Ferkan; Alouini, Mohamed-Slim
2012-01-01
The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ N denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on correlated generalized fading environments. © 2012 IEEE.
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type
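The log-linear pooling used above to relax the independence assumption combines probability estimates from several data sources as p ∝ Π p_i^{w_i}, with weights attenuating less reliable sources. The probabilities and weights in this sketch are invented for illustration:

```python
def log_linear_pool(probabilities, weights):
    """Combine probability estimates p_i via p ∝ Π p_i^w_i (log-linear pooling),
    normalized against the complement event; an illustrative two-class sketch."""
    num = 1.0
    den = 1.0
    for p, w in zip(probabilities, weights):
        num *= p ** w
        den *= (1.0 - p) ** w
    return num / (num + den)

# two data sources agree on the same event with different confidence;
# the second source is down-weighted to soften the independence assumption
print(round(log_linear_pool([0.8, 0.6], [1.0, 0.5]), 3))
```

With equal unit weights this reduces to the naive independent-likelihood product, so the weights are exactly the knob that tempers the "too strong" independence assumption mentioned above.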
DEFF Research Database (Denmark)
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite-difference gradient computation, we get a significant speedup.
International Nuclear Information System (INIS)
Yuan, Minghu; Feng, Liqiang; Lü, Rui; Chu, Tianshu
2014-01-01
We show that by introducing the Wigner rotation technique into the solution of the time-dependent Schrödinger equation in the length gauge, computational efficiency can be greatly improved in describing atoms in intense few-cycle circularly polarized laser pulses. The methodology with the Wigner rotation technique underlying our OpenMP parallel computational code for circularly polarized laser pulses is described. Results of test calculations investigating the scaling of the computational code with the number of electronic angular basis functions l, as well as the strong-field phenomena, are presented and discussed for the hydrogen atom
International Nuclear Information System (INIS)
Chubar, O.; Couprie, M.-E.
2007-01-01
A CPU-efficient method for calculating the frequency-domain electric field of Coherent Synchrotron Radiation (CSR), taking into account the 6D phase-space distribution of electrons in a bunch, is proposed. As an application example, calculation results are presented for the CSR emitted by an electron bunch with small longitudinal and large transverse sizes. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e., using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, this affects the angular distribution of the CSR at that wavelength and reduces the coherent flux. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far-infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and can therefore be considered for practical use
Energy Technology Data Exchange (ETDEWEB)
Pimlott, Sally L. [Department of Clinical Physics, West of Scotland Radionuclide Dispensary, Western Infirmary, G11 6NT Glasgow (United Kingdom)], E-mail: s.pimlott@clinmed.gla.ac.uk; Stevenson, Louise [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom); Wyper, David J. [Institute of Neurological Sciences, Southern General Hospital, G51 4TF Glasgow (United Kingdom); Sutherland, Andrew [Department of Chemistry, WestCHEM, University of Glasgow, G12 8QQ Glasgow (United Kingdom)
2008-07-15
Introduction: [{sup 123}I]I-PK11195 is a high-affinity single photon emission computed tomography radiotracer for peripheral benzodiazepine receptors that has previously been used to measure activated microglia and to assess neuroinflammation in the living human brain. This study investigates the radiosynthesis of [{sup 123}I]I-PK11195 in order to develop a rapid and efficient method that obtains [{sup 123}I]I-PK11195 with a high specific activity for in vivo animal and human imaging studies. Methods: The synthesis of [{sup 123}I]I-PK11195 was evaluated using a solid-state interhalogen exchange method and an electrophilic iododestannylation method, where bromine and trimethylstannyl derivatives were used as precursors, respectively. In the electrophilic iododestannylation method, the oxidants peracetic acid and chloramine-T were both investigated. Results: Electrophilic iododestannylation produced [{sup 123}I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than achievable using the halogen exchange method investigated. Using chloramine-T as oxidant provided a rapid and efficient method of choice for the synthesis of [{sup 123}I]I-PK11195. Conclusions: [{sup 123}I]I-PK11195 has been successfully synthesized via a rapid and efficient electrophilic iododestannylation method, producing [{sup 123}I]I-PK11195 with a higher isolated radiochemical yield and a higher specific activity than previously achieved.
Gaining Efficiency of Computational Experiments in Modeling the Flight Vehicle Movement
Directory of Open Access Journals (Sweden)
I. K. Romanova
2017-01-01
Full Text Available The paper considers one of the important aspects of gaining efficiency in computational experiments, namely grid optimization. Solving this problem will ultimately lead to a more refined system, because multivariate simulation is the basis both for applying optimization methods by the specified criteria and for identifying problems in the functioning of technical systems. The paper discusses a class of moving objects representing bodies of revolution which, for one reason or another, undergo deformation of the casing. Analyses using the author's techniques have shown complex functional dependencies in the aerodynamic characteristics of the studied class of deformed objects. The paper presents a literature review on new ways of organizing calculations, data storage and transfer, and analyses methods of forming grids, including those used in initial calculations and in the visualization of information. In addition to regular grids, unstructured grids are considered, including those for dynamic spatio-temporal information. Attention is drawn to the problem of efficient retrieval of information. The paper shows relevant capabilities for working with large data volumes, including OLAP technology, multidimensional cubes (Data Cube) and, finally, an integrated Data Mining approach. Despite the huge number of successful modern approaches to the problems of forming, storing and processing multidimensional data, it should be noted that these tools are computationally quite expensive. The expenditure on using such special tools often exceeds the cost of the computational experiments themselves. In this regard, it was recognized that it is unnecessary to abandon traditional tools; the focus should instead be on directly increasing their efficiency. Within the framework of the applied problem under consideration, such a tool was the formation of optimal grids. The optimal grid was understood to be a grid in the N
Mittendorff, Kariene; Faber, Marike; Staman, Laura
2017-01-01
In order to lower dropout rates and stimulate student success in higher education, the Dutch government implemented a new law demanding that every higher education institute offer a matching activity to applying students. This article evaluates how students and teachers experience this matching activity. Data were collected in a Dutch university…
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Energy Technology Data Exchange (ETDEWEB)
Knissel, J.
1999-12-15
The study investigates how far, and by which measures, the primary energy demand of office and administrative buildings can be reduced, and what effects this has on economic efficiency. To this end, the energy performance quality of a simple example building is improved step by step and the resulting change in its primary energy consumption coefficient is determined. (orig.)
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
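The central quantity can be sketched directly: the interaction entropy term is an exponential average of the interaction-energy fluctuations along the trajectory, -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int = E_int - ⟨E_int⟩. The synthetic energy series below is illustrative; a real application would read E_int frames from MD output.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def interaction_entropy(e_int, temperature=300.0):
    """Entropic contribution -T*dS (kcal/mol) from a time series of
    protein-ligand interaction energies:
    -T*dS = kT * ln < exp(dE/kT) >, with dE = E_int - <E_int>."""
    e_int = np.asarray(e_int, dtype=float)
    kt = KB * temperature
    fluct = e_int - e_int.mean()
    return kt * np.log(np.mean(np.exp(fluct / kt)))

rng = np.random.default_rng(1)
steady = np.full(1000, -45.0)                 # no fluctuation
noisy = -45.0 + rng.normal(0.0, 2.0, 50000)   # ~2 kcal/mol fluctuations
assert interaction_entropy(steady) == 0.0     # no fluctuation, no penalty
assert interaction_entropy(noisy) > 0.0       # fluctuations always cost
```

Because the average is over frames already produced for the enthalpy term, the entropic penalty comes at no extra simulation cost, which is the method's main selling point over normal-mode analysis.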
Kunsting, Josef; Wirth, Joachim; Paas, Fred
2011-01-01
Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…
Directory of Open Access Journals (Sweden)
Lianjie Zhou
2016-06-01
Full Text Available Comprehensive surface soil moisture (SM) monitoring is a vital task in precision agriculture applications. SM monitoring includes remote sensing imagery monitoring and in situ sensor-based observational monitoring. Cloud computing can increase computational efficiency enormously. A geographical web service was developed to assist in agronomic decision making, and this tool can be scaled to any location and crop. By integrating cloud computing and the web service-enabled information infrastructure, this study uses the cloud computing-enabled spatio-temporal cyber-physical infrastructure (CESCI) to provide an efficient solution for soil moisture monitoring in precision agriculture. On the server side of CESCI, diverse Open Geospatial Consortium web services work closely with each other. Hubei Province, located on the Jianghan Plain in central China, is selected as the remote sensing study area in the experiment. The Baoxie scientific experimental field in Wuhan City is selected as the in situ sensor study area. The results show that the proposed method enhances the efficiency of remote sensing imagery mapping and in situ soil moisture interpolation. In addition, the proposed method is compared to other existing precision agriculture infrastructures. In this comparison, the proposed infrastructure performs soil moisture mapping of Hubei Province in 1.4 min and in situ soil moisture interpolation in near real time. Moreover, an enhanced performance monitoring method can help to reduce costs in precision agriculture monitoring, as well as increasing agricultural productivity and farmers' net income.
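The abstract does not specify which interpolation algorithm the in situ service uses; as a hedged illustration, scattered sensor readings are commonly spread to a grid with a simple scheme such as inverse-distance weighting, sketched here with made-up sensor coordinates and readings.

```python
import numpy as np

def idw(sensor_xy, sensor_values, query_xy, power=2.0):
    """Inverse-distance-weighted soil moisture estimate at query points
    from scattered in situ sensors. A query coinciding with a sensor
    returns that sensor's reading exactly."""
    sensor_xy = np.asarray(sensor_xy, float)
    vals = np.asarray(sensor_values, float)
    out = []
    for q in np.atleast_2d(np.asarray(query_xy, float)):
        d = np.linalg.norm(sensor_xy - q, axis=1)
        hit = d < 1e-12
        if hit.any():                      # exact match with a sensor
            out.append(vals[hit][0])
            continue
        w = 1.0 / d ** power
        out.append(float(np.sum(w * vals) / np.sum(w)))
    return np.array(out)

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
moisture = [0.20, 0.30, 0.40]              # volumetric soil moisture
est = idw(sensors, moisture, [(0.0, 0.0), (0.5, 0.5)])
assert est[0] == 0.20                      # honors the sensor reading
assert min(moisture) < est[1] < max(moisture)
```

IDW is embarrassingly parallel over query points, which is what makes this kind of interpolation a good fit for the cloud-side scaling the paper describes.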
Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.
2011-01-01
We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method
Shen, Fei; Tian, Libin; Yuan, Hairong; Pang, Yunzhi; Chen, Shulin; Zou, Dexun; Zhu, Baoning; Liu, Yanping; Li, Xiujin
2013-10-01
As a lignocellulose-based substrate for anaerobic digestion, rice straw is characterized by low density, high water absorbability, and poor fluidity. Its mixing performances in digestion are completely different from traditional substrates such as animal manures. Computational fluid dynamics (CFD) simulation was employed to investigate mixing performances and determine suitable stirring parameters for efficient biogas production from rice straw. The results from CFD simulation were applied in the anaerobic digestion tests to further investigate their reliability. The results indicated that the mixing performances could be improved by triple impellers with pitched blades, and complete mixing was easily achieved at a stirring rate of 80 rpm, as compared to 20-60 rpm. However, mixing could not be significantly improved when the stirring rate was further increased from 80 to 160 rpm. The simulation results agreed well with the experimental results. The determined mixing parameters could achieve the highest biogas yield of 370 mL (g TS)⁻¹ (729 mL (g TS digested)⁻¹) and 431 mL (g TS)⁻¹ (632 mL (g TS digested)⁻¹) with the shortest technical digestion time (T80) of 46 days. The results obtained in this work could provide useful guides for the design and operation of biogas plants using rice straw as substrate.
Efficient 2-D DCT Computation from an Image Representation Point of View
Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.
2009-01-01
A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation) the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
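The ISR scheme itself is the paper's contribution; the baseline it accelerates is the standard separable 2-D DCT, which a short sketch makes concrete (orthonormal DCT-II, square blocks assumed).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k, column i."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)   # DC row rescaled for orthonormality
    return c

def dct2(block):
    """Separable 2-D DCT: transform rows, then columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
coef = dct2(img)
# The orthonormal DCT preserves energy and inverts exactly.
assert np.allclose(np.sum(coef ** 2), np.sum(img ** 2))
assert np.allclose(idct2(coef), img)
```

The separable form already reduces the naive O(N⁴) cost to O(N³); the ISR idea goes further by exploiting the structure of binary image slices, which this baseline does not attempt.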
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between flashes of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
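The paper's 16-, 18- and 21-flash patterns are designed deterministically; the constraint they encode, a minimum number of non-target flashes between successive target-containing flashes, can be illustrated with a naive rejection-sampling sketch (all names below are illustrative, not the paper's construction).

```python
import random

def min_target_gap(flash_groups, target):
    """Smallest number of non-target flashes between consecutive
    flashes that contain the target character."""
    hits = [i for i, g in enumerate(flash_groups) if target in g]
    return min(b - a - 1 for a, b in zip(hits, hits[1:]))

def make_sequence(groups, target, min_gap, seed=0):
    """Shuffle the flash groups until every pair of consecutive
    target-containing flashes is separated by at least min_gap
    non-target flashes (simple rejection sampling)."""
    rng = random.Random(seed)
    seq = list(groups)
    while True:
        rng.shuffle(seq)
        if min_target_gap(seq, target) >= min_gap:
            return seq

# Toy 7x12 matrix: the groups are the 7 rows and 12 columns, and the
# target character lies in exactly one row and one column.
rows = [{(r, c) for c in range(12)} for r in range(7)]
cols = [{(r, c) for r in range(7)} for c in range(12)]
target = (3, 5)
seq = make_sequence(rows + cols, target, min_gap=2)
assert len(seq) == 19
assert min_target_gap(seq, target) >= 2
```

A real speller must satisfy the gap constraint for every character simultaneously, which is why the paper uses purpose-built patterns rather than shuffling.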
A memory efficient user interface for CLIPS micro-computer applications
Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin
1990-01-01
The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert-level treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer running the MS-DOS operating system, which restricted the size of the run-time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative-actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory-efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain-specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of in main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert system applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.
International Nuclear Information System (INIS)
Dall'agnol, Cristina; Barletta, Fernando Branco; Hartmann, Mateus Silveira Martins
2008-01-01
This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both moments, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the mean percentages of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed a statistically significant difference (p=0.015), as most cases in Group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)
Directory of Open Access Journals (Sweden)
Chia-Chang Hu
2005-04-01
Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, O(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is a function of O((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.
International Nuclear Information System (INIS)
Quinn, J.J.
1996-01-01
Geostatistical analysis of hydraulic head data is useful in producing unbiased contour plots of head estimates and relative errors. However, at most sites being characterized, monitoring wells are generally present at different densities, with clusters of wells in some areas and few wells elsewhere. The problem that arises when kriging data at different densities is in achieving adequate resolution of the grid while maintaining computational efficiency and working within software limitations. For the site considered, 113 data points were available over a 14-mi² study area, including 57 monitoring wells within an area of concern of 1.5 mi². Variogram analyses of the data indicate a linear model with a negligible nugget effect. The geostatistical package used in the study allows a maximum grid of 100 by 100 cells. Two-dimensional kriging was performed for the entire study area with a 500-ft grid spacing, while the smaller zone was modeled separately with a 100-ft spacing. In this manner, grid cells for the dense area and the sparse area remained small relative to the well separation distances, and the maximum dimensions of the program were not exceeded. The spatial head results for the detailed zone were then nested into the regional output by use of a graphical, object-oriented database that performed the contouring of the geostatistical output. This study benefitted from the two-scale approach and from very fine geostatistical grid spacings relative to typical data separation distances. The combining of the sparse, regional results with those from the finer-resolution area of concern yielded contours that honored the actual data at every measurement location. The method applied in this study can also be used to generate reproducible, unbiased representations of other types of spatial data.
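The exact-interpolation property noted in the abstract (contours honoring the data at every measurement location) holds for kriging with a zero-nugget variogram. A minimal ordinary-kriging sketch with the linear variogram model, using made-up well coordinates and heads:

```python
import numpy as np

def ordinary_kriging(xy, z, query, slope=1.0):
    """Ordinary kriging with a linear, nugget-free variogram
    gamma(h) = slope*h. With no nugget the estimator honors the data
    exactly at every measurement location."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = slope * d
    a[n, :n] = 1.0            # unbiasedness constraint: weights sum to 1
    a[:n, n] = 1.0
    out = []
    for q in np.atleast_2d(np.asarray(query, float)):
        g = np.append(slope * np.linalg.norm(xy - q, axis=1), 1.0)
        w = np.linalg.solve(a, g)[:n]
        out.append(float(w @ z))
    return np.array(out)

wells = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (800.0, 900.0)]
heads = [100.0, 98.5, 101.2, 99.0]   # hypothetical hydraulic heads, ft
est = ordinary_kriging(wells, heads, [(0.0, 0.0), (500.0, 500.0)])
assert np.isclose(est[0], 100.0)     # honors the data at a well
```

The two-scale nesting in the study amounts to running this estimator on two grids of different spacing and stitching the outputs, which keeps each kriging system and grid within the software's 100-by-100-cell limit.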
Directory of Open Access Journals (Sweden)
M. G. Solodkaya
2015-01-01
Full Text Available The road-transport complex objectively reflects the essence of an efficient transportation process carried out by transport facilities along the highways. The complex emphasizes the equivalent contribution of transport facilities and highways to a unified transportation process. The efficiency of the state economy depends rigidly on the availability of a developed and well-functioning network of highways. Countries with developed economies, which have generally finished creating their national highway networks, continue to invest in public road systems, which stimulates the development of industrial sectors, agriculture, trade, etc. Their progress and efficient functioning are possible only with the balanced, overall development of the country's road-transport complex. The functioning of the road-transport complex is inextricably connected with the operation of automotive transport and road infrastructure. The interaction of these two components of the unified economic system is determined by the technical characteristics of the automotive transport and the transport and operational indices of the highways. The development of methods for optimally organizing the management of the road complex is an important problem of the national economy while market economy mechanisms are being formed. Further growth of capital expenditures, including investments, will be needed in order to ensure road conditions that meet the requirements of modern and future road traffic. Management of highway network conditions presupposes the selection of a set of regulatory impacts on road conditions that minimizes expenses in the road-transport complex. The elaboration and realization of the most efficient repair measures serve as such a regulatory impact. The purpose is achieved by solving the problem of minimizing transportation expenses in the road-transport complex in the process of the realization of the most
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation-based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using an equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with the in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for the implementation of distributed hydrological models at regional scales.
Directory of Open Access Journals (Sweden)
Roland Schwald
2011-07-01
Full Text Available The aim of this paper is to present a curriculum development concept for teaching information systems content to students enrolled in non-computer science programs, with examples from the Business Administration programs at Albstadt-Sigmaringen University, a state university located in Southern Germany. The main focus of this paper therefore is to describe this curriculum development concept. Since this concept involves two disciplines, i.e. business administration and computer science, the author argues that it is necessary to define the roles of one discipline for the other and gives an example of how this could be done. The paper acknowledges that the starting point for the development of a curriculum, such as one for a business administration program, will be the requirements of the potential employers of the graduates. The paper goes on to recommend the assignment of categorized skills and qualifications, such as knowledge, social, methodological, and decision-making skills, to the different parts of the curricula in question. After the mapping of skills and courses, the paper describes how specific information systems can be used in courses, especially those with a specific focus on methodological skills. Two examples from Albstadt-Sigmaringen University are given. At the end of the paper the author explains the implications and limitations of such a concept, especially for programs that build on each other, as is the case for some Bachelor and Master programs. The paper concludes that though some elements of this concept are transferable, every institution of higher education must still take its own situation into consideration when developing curriculum concepts. It provides recommendations on which issues every institution should solve for itself.
Dongen, van Laura H.; Kiel, Douglas P.; Soedamah-Muthu, Sabita S.; Bouxsein, Mary L.; Hannan, Marian T.; Sahni, Shivani
2018-01-01
Previous studies found that dairy foods were associated with higher areal bone mineral density (BMD). However, data on bone geometry or compartment-specific bone density is lacking. In this cross-sectional study, the association of milk, yogurt, cheese, cream, milk+yogurt, and milk+yogurt+cheese
International Nuclear Information System (INIS)
Zhao Jianheng; Sun Chengwei; Ma Ruchao
2001-01-01
A new design of optical fiber VISAR probes is described. It consists of optical fibers and two lenses, and has a simpler structure and lower cost than other designs. If the size of the image near the end face of the collecting fiber is larger than the diameter of the fiber core, the distance between the probe and the target can be decreased. The precision required in the design is thus lower, and the probe is easier to build; at the same time, the signal-collecting efficiency is improved to some degree. During the design of the probe, a technique for manufacturing plexiglass lenses was developed. A plexiglass lens is used to replace the glass lens, which reduces the cost of the probe. The factors affecting the collecting efficiency are analyzed.
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per RAW event. The central collisions are more complex and...
Energy Technology Data Exchange (ETDEWEB)
Klimstra, J.
2007-07-01
The electricity sector is the single largest user of primary energy in the world. The issues of fuel prices, security of supply and greenhouse gas emissions are therefore closely connected with electricity generation. The total energy efficiency of the electricity sector is only 32.5%, so quick improvements are required. However, the uncertainty over fuel prices and technology preferences is such that most investors are hesitant. The life of existing, often low-efficiency, power plants is therefore extended. At the same time, the demand for electricity is rapidly increasing and the gap between capacity and demand decreases. This paper intends to bring more clarity into the economic and environmental boundary conditions of power plants. The goal is to find an attractive way to achieve rapid efficiency improvement with even better system reliability, without increasing costs. The paper discusses fuel price developments and the costs of generating technologies in connection with the typical demand pattern of electricity. Ultimately, it appears that local generation, preferably coupled with cogeneration, can be an important part of the solution. (auth)
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
Energy Technology Data Exchange (ETDEWEB)
Mayer, Edgar [CentraLine c/o Honeywell GmbH, Schoenaich (Germany)
2009-07-01
With smart control systems, the energy conservation potential of industrial buildings could be fully utilized. This means, e.g., that classic control algorithms must be replaced by new solutions. New methods will ensure higher energy efficiency with maximum comfort; they will also prolong the service life and the inspection intervals of the technical facilities. (orig.)
DEFF Research Database (Denmark)
Emamifar, Amir; van Bui Hansen, Morten Hai; Jensen Hansen, Inger Marie
2017-01-01
To elucidate the difference between ratios of nurse consultation sought by senior rheumatologists and junior physicians in rheumatology residency training, and also to evaluate a physician efficiency index with respect to patients with rheumatoid arthritis (RA). Data regarding outpatient visits for RA...... patients between November 2013 and 2015 were extracted. The mean interval (days) between consultations, the nurse/physician visits ratio, and the physician efficiency index (nurse/physician visits ratio × mean interval) for each senior and junior physician were calculated. Disease Activity Score in 28 joints....../physician visits ratio (P = .01) and mean efficiency index (P = .04) of senior rheumatologists were significantly higher than those of junior physicians. Regression analysis showed a positive correlation between physician postgraduate experience and physician efficiency index adjusted for DAS28 at baseline...
Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2012-01-10
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
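The transpose-based data flow described in the claim can be sketched in a few lines. The sketch below emulates the "all-to-all" redistribution with an in-memory transpose; on a real distributed-memory machine each block of rows would live on a separate node and the transpose would be an MPI all-to-all, and the random send order that the patent uses to avoid network hot spots is not modeled here. Function names and the single-process setup are illustrative assumptions.

```python
import numpy as np

def distributed_fft2(a):
    """Transpose-based 2-D FFT following the patented data flow:
    row-wise 1-D FFTs, an all-to-all redistribution (emulated here
    by a global transpose), then 1-D FFTs along the new rows."""
    rows = np.fft.fft(a, axis=1)              # step 1: 1-D FFT on locally held rows
    redistributed = rows.T                    # step 2: all-to-all (global transpose)
    cols = np.fft.fft(redistributed, axis=1)  # step 3: 1-D FFT along the second dimension
    return cols.T                             # undo the transpose for comparison

# The composed result matches a direct 2-D FFT:
a = np.random.rand(8, 8)
assert np.allclose(distributed_fft2(a), np.fft.fft2(a))
```

The point of the redistribution is that each 1-D FFT only ever needs data that is contiguous on one node; all inter-node traffic is concentrated in the transpose step.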
Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2008-01-01
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
International Nuclear Information System (INIS)
Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen
2007-01-01
The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across the board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration and the mechanism through which increased government revenues are recycled back to the economy
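As a back-of-the-envelope illustration of the rebound range reported above (a first-order approximation, not the paper's CGE model, which captures general-equilibrium responses), the net energy saving from an efficiency gain can be computed as:

```python
def net_energy_saving(efficiency_gain, rebound):
    """First-order net reduction in energy use: an efficiency gain e
    with rebound fraction r saves only e * (1 - r). Backfire would
    correspond to r > 1, which the paper does not find."""
    return efficiency_gain * (1.0 - rebound)

# The paper's 5% efficiency gain with 30-50% rebound nets only a
# 2.5-3.5% reduction in energy use:
assert abs(net_energy_saving(0.05, 0.50) - 0.025) < 1e-12
assert abs(net_energy_saving(0.05, 0.30) - 0.035) < 1e-12
```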
Tajnikar, Maks; Debevec, Jasmina
2008-01-01
The present paper tackles the issue of the higher education funding system in Slovenia. Its main attribute is that institutions are classified into study groups according to their fields of education, and funds granted by the state are based on their weights or study group factors (SGF). Analysis conducted using data envelopment analysis tested…
International Nuclear Information System (INIS)
Montagu, K.D.; Woo, K.C.; Puangchit, L.
1999-01-01
Full text: Determining the water-use-efficiency of trees in relation to wood production is problematic due to the sheer size of the plant and the number of years taken to produce the wood. Indirect measures of water-use-efficiency, such as carbon isotope discrimination (Δ), are therefore attractive to tree breeders wishing to select for increased water-use-efficiency. To begin to evaluate the usefulness of Δ as a selection parameter for the tropical tree Acacia auriculiformis we addressed the following questions: 1. Within the tree canopy, how variable is Δ? 2. Is there any genotypic variation in Δ? and 3. Does water availability affect genotypic variation? To address these questions we sampled foliage from pot trials and field trials of A. auriculiformis ranging in age from 3 months to 8 years in Australia and Thailand. In 16-18 m high 8-year-old trees, canopy variation in Δ was large (P>0.01). Foliage Δ values increased down the tree from 22.0 ‰ at the top to 24.7 ‰ at the base. The decrease was rapid in the top 3 m of the canopy, thus considerable care must be taken in sampling foliage from the same position in the canopy. Genotypic variation in Δ was observed in seedlings and 2-year-old trees (P>0.01) but not in 8-year-old trees (P=0.60). Where genotypic variation was observed, the differences between the lowest and highest values were 2.2-3.6 ‰. Reduced water availability decreased Δ values in both pot and field studies, but not in a consistent way across seedlots. Thus it would appear that the Δ of trees grown under favourable conditions does not give an indication of the Δ value which will be obtained under water-limited conditions. This complicates the use of Δ as a screening method. We have clearly shown that genotypic variation occurs in A. auriculiformis in both seedlings and young field-grown trees. Considerable care is required when sampling large trees, as variation in Δ within the tree can be as large as between genotypes. The challenge
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the
Spatially Uniform ReliefF (SURF) for computationally-efficient filtering of gene-gene interactions
Directory of Open Access Journals (Sweden)
Greene Casey S
2009-09-01
Full Text Available Abstract Background Genome-wide association studies are becoming the de facto standard in the genetic analysis of common human diseases. Given the complexity and robustness of biological networks, such diseases are unlikely to be the result of single points of failure but instead likely arise from the joint failure of two or more interacting components. The hope in genome-wide screens is that these points of failure can be linked to single nucleotide polymorphisms (SNPs) which confer disease susceptibility. Detecting interacting variants that lead to disease in the absence of single-gene effects is difficult, however, and methods to exhaustively analyze sets of these variants for interactions are combinatorial in nature, thus making them computationally infeasible. Efficient algorithms which can detect interacting SNPs are needed. ReliefF is one such promising algorithm, although it has a low success rate for noisy datasets when the interaction effect is small. ReliefF has been paired with an iterative approach, Tuned ReliefF (TuRF), which improves the estimation of weights in noisy data but does not fundamentally change the underlying ReliefF algorithm. To improve the sensitivity of studies using these methods to detect small effects we introduce Spatially Uniform ReliefF (SURF). Results SURF's ability to detect interactions in this domain is significantly greater than that of ReliefF. Similarly, SURF in combination with the TuRF strategy significantly outperforms TuRF alone for SNP selection under an epistasis model. It is important to note that this success rate increase does not require an increase in algorithmic complexity, and is achieved even with the removal of a nuisance parameter from the algorithm. Conclusion Researchers performing genetic association studies and aiming to discover gene-gene interactions associated with increased disease susceptibility should use SURF in place of ReliefF. For instance, SURF should be
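The core algorithmic change is small: where ReliefF scores attributes using a fixed number k of nearest hits and misses, SURF uses every instance closer than a fixed threshold, typically the mean pairwise distance, which removes the nuisance parameter k. A minimal sketch, assuming binary-encoded genotypes and Manhattan distance (details the abstract does not specify):

```python
import numpy as np

def surf_weights(X, y):
    """Minimal sketch of Spatially Uniform ReliefF (SURF).

    For each instance, neighbors within the global mean pairwise
    distance T are used: attributes that differ from a same-class
    neighbor (near hit) are penalized, and attributes that differ
    from a different-class neighbor (near miss) are rewarded."""
    n, m = X.shape
    dist = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)  # Manhattan distances
    T = dist[np.triu_indices(n, k=1)].mean()                  # fixed global threshold
    w = np.zeros(m)
    for i in range(n):
        for j in range(n):
            if j == i or dist[i, j] >= T:
                continue
            diff = (X[i] != X[j]).astype(float)
            if y[i] == y[j]:
                w -= diff / n   # near hit: differing attribute is penalized
            else:
                w += diff / n   # near miss: differing attribute is rewarded
    return w
```

On a toy dataset where one SNP determines the class label, that SNP receives the highest weight, which is the filtering behavior the study exploits.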
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
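The broadcast search can be modeled as a timer race: each host schedules a reply at a delay that encodes its load, the extreme host's timer expires first, and its reply suppresses the others, so the shared medium carries roughly one reply regardless of the number of hosts. A toy event-order model of that idea (function names and the migration rule are illustrative assumptions, not the published protocol timing):

```python
def broadcast_extreme_search(loads, want_max=False):
    """Model of a broadcast search on a multiaccess network: each host
    replies after a delay proportional to its distance from the sought
    extreme, so the first reply identifies the winner and suppresses
    the rest. Sorting by delay stands in for 'first timer expires'."""
    key = (lambda item: -item[1]) if want_max else (lambda item: item[1])
    replies = sorted(loads.items(), key=key)
    return replies[0][0]            # only the first reply hits the wire

def balance_step(loads):
    """One GAMMON-style step: migrate a unit of work from the most
    loaded host to the least loaded one, if the imbalance warrants it."""
    src = broadcast_extreme_search(loads, want_max=True)
    dst = broadcast_extreme_search(loads, want_max=False)
    if loads[src] - loads[dst] > 1:
        loads[src] -= 1
        loads[dst] += 1
    return loads
```

The constant average overhead claimed above comes from the suppression: the number of messages on the network does not grow with the number of participating stations.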
Energy Technology Data Exchange (ETDEWEB)
Kim, S. H.; Kim, M. H. [Silla University, Busan (Korea, Republic of)
2009-07-01
In recent years, with the continuing increase in personal computer performance, including high-quality, high-resolution imaging, the computer system's components produce large amounts of heat during operation. This study analyzes and investigates the ability and efficiency of the cooling system inside the computer by means of the Central Processing Unit (CPU) and power supply cooling fans. The research was conducted to increase the ability of the cooling system inside the computer by making a structure which produces different air pressures in an air inflow tube. Consequently, when temperatures of the CPU and the space inside the computer were compared with a general personal computer, temperatures of the tested CPU, the interior space and the heat sink were as much as 5 °C, 2.5 °C and 7 °C lower, respectively. In addition, the fan speed was as much as 250 RPM lower after 1 hour of operation. This research explored the possibility of enhancing the effective cooling of high-performance computer systems.
International Nuclear Information System (INIS)
Sulbhewar, Litesh N; Raveendranath, P
2014-01-01
An efficient piezoelectric smart beam finite element based on Reddy’s third-order displacement field and layerwise linear potential is presented here. The present formulation is based on the coupled polynomial field interpolation of variables, unlike conventional piezoelectric beam formulations that use independent polynomials. Governing equations derived using a variational formulation are used to establish the relationship between field variables. The resulting expressions are used to formulate coupled shape functions. Starting with an assumed cubic polynomial for transverse displacement (w) and a linear polynomial for electric potential (φ), coupled polynomials for axial displacement (u) and section rotation (θ) are found. This leads to a coupled quadratic polynomial representation for axial displacement (u) and section rotation (θ). The formulation allows accommodation of extension–bending, shear–bending and electromechanical couplings at the interpolation level itself, in a variationally consistent manner. The proposed interpolation scheme is shown to eliminate the locking effects exhibited by conventional independent polynomial field interpolations and improve the convergence characteristics of HSDT based piezoelectric beam elements. Also, the present coupled formulation uses only three mechanical degrees of freedom per node, one less than the conventional formulations. Results from numerical test problems prove the accuracy and efficiency of the present formulation. (paper)
Achieving 99.9% proton spin-flip efficiency at higher energy with a small rf dipole
Leonova, M A; Gebel, R; Hinterberger, F; Krisch, A D; Lehrach, A; Lorentz, B; Maier, R; Morozov, V S; Prasuhn, D; Raymond, R S; Schnase, A; Stockhorst, H; Ulbrich, K; Wong, V K; 10.1103/PhysRevLett.93.224801
2004-01-01
We recently used a new ferrite rf dipole to study spin flipping of a 2.1 GeV/c vertically polarized proton beam stored in the COSY Cooler Synchrotron in Jülich, Germany. We swept the rf dipole's frequency through an rf-induced spin resonance to flip the beam's polarization direction. After determining the resonance's frequency, we varied the frequency range, frequency ramp time, and number of flips. At the rf dipole's maximum strength and optimum frequency range and ramp time, we measured a spin-flip efficiency of 99.92 ± 0.04%. This result, along with a similar 0.49 GeV/c IUCF result, indicates that, due to the Lorentz invariance of an rf dipole's transverse field integral ∫B·dl and the weak energy dependence of its spin-resonance strength, an only 35% stronger rf dipole should allow efficient spin flipping in the 100 GeV BNL RHIC Collider or even the 7 TeV CERN Large Hadron Collider.
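Spin flipping in such frequency-sweep experiments is conventionally analyzed with the Froissart-Stora formula, which relates the final-to-initial polarization ratio to the resonance strength ε, the beam circulation frequency f_c, and the crossing speed Δf/Δt; a ratio near -1 means a near-perfect flip. A sketch of that relation (the parameter values in the usage note are illustrative, not the measured COSY numbers):

```python
import math

def spin_flip_ratio(epsilon, f_c, delta_f, delta_t):
    """Froissart-Stora estimate for sweeping an rf-induced resonance:
    P_f/P_i = 2*exp(-(pi*epsilon*f_c)**2 / (delta_f/delta_t)) - 1.
    Slow crossing of a strong resonance gives a ratio near -1
    (full spin flip); fast crossing of a weak one gives near +1
    (polarization undisturbed)."""
    crossing_speed = delta_f / delta_t   # Hz per second
    return 2.0 * math.exp(-(math.pi * epsilon * f_c) ** 2 / crossing_speed) - 1.0
```

For example, a strong resonance crossed slowly, `spin_flip_ratio(1e-4, 1e6, 1e3, 1.0)`, is driven essentially to -1, while a very weak resonance crossed quickly stays near +1, which is why the experimenters tuned the frequency range and ramp time to maximize the flip.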
Efficient Quantification of Uncertainties in Complex Computer Code Results, Phase I
National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...
Balsara, Dinshaw S.; Garain, Sudip; Taflove, Allen; Montecinos, Gino
2018-02-01
The Finite Difference Time Domain (FDTD) scheme has served the computational electrodynamics community very well and part of its success stems from its ability to satisfy the constraints in Maxwell's equations. Even so, in the previous paper of this series we were able to present a second order accurate Godunov scheme for computational electrodynamics (CED) which satisfied all the same constraints and simultaneously retained all the traditional advantages of Godunov schemes. In this paper we extend the Finite Volume Time Domain (FVTD) schemes for CED in material media to better than second order of accuracy. From the FDTD method, we retain a somewhat modified staggering strategy of primal variables which enables a very beneficial constraint-preservation for the electric displacement and magnetic induction vector fields. This is accomplished with constraint-preserving reconstruction methods which are extended in this paper to third and fourth orders of accuracy. The idea of one-dimensional upwinding from Godunov schemes has to be significantly modified to use the multidimensionally upwinded Riemann solvers developed by the first author. In this paper, we show how they can be used within the context of a higher order scheme for CED. We also report on advances in timestepping. We show how Runge-Kutta IMEX schemes can be adapted to CED even in the presence of stiff source terms brought on by large conductivities as well as strong spatial variations in permittivity and permeability. We also formulate very efficient ADER timestepping strategies to endow our method with sub-cell resolving capabilities. As a result, our method can be stiffly-stable and resolve significant sub-cell variation in the material properties within a zone. Moreover, we present ADER schemes that are applicable to all hyperbolic PDEs with stiff source terms and at all orders of accuracy. Our new ADER formulation offers a treatment of stiff source terms that is much more efficient than previous ADER
DEFF Research Database (Denmark)
Blasques, José Pedro Albergaria Amaral; Bitsche, Robert
2015-01-01
This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual...... Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three......-dimensional solid finite element models while using only a fraction of the computation time....
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Directory of Open Access Journals (Sweden)
Muhammad Harist Murdani
2018-03-01
Full Text Available In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
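The weighted-sum metric can be sketched directly: combine a centroid-distance term with a road-connectivity term so that ZIP codes joined by many shared roads score as closer than pure geometry suggests. Field names, the planar distance, and the normalization of the road term are assumptions for illustration; the abstract only states that the combination is a weighted sum that still behaves as a distance measure.

```python
import math

def zip_proximity(zip_a, zip_b, w_dist=0.5, w_road=0.5):
    """Sketch of a combined ZIP-code proximity metric: weighted sum of
    centroid distance and an intersecting-road-network term.
    Smaller values mean closer ZIP codes."""
    # Euclidean centroid distance (planar approximation).
    d = math.hypot(zip_a["x"] - zip_b["x"], zip_a["y"] - zip_b["y"])
    # Road connectivity: more shared roads -> closer, so invert the count.
    shared_roads = len(zip_a["roads"] & zip_b["roads"])
    road_term = 1.0 / (1.0 + shared_roads)
    return w_dist * d + w_road * road_term
```

Ad-Hoc proximity is then a single evaluation of this metric, while Top-K proximity ranks all candidate neighbors by it and keeps the K smallest values.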
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.
Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee
2018-03-24
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jun Yeob; Jeong, Jae Jun [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Suh, Jae Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
During the development process of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive. Thus, automation of this process is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability. This results in a high degree of efficiency for code development. The program was developed using the Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.
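An automated NRT loop of the kind described reduces to: run the code on every input deck, then compare the new output against a stored reference, allowing a small numeric tolerance so benign floating-point noise does not flag a regression. A sketch under assumed conventions (paths, output naming, and the comparison rule are illustrative; the actual program is VBA-based and specific to the SPACE code):

```python
import subprocess
from pathlib import Path

def outputs_match(new, ref, rel_tol):
    """Compare outputs line by line, allowing a relative tolerance on
    lines that parse as numbers and exact equality otherwise."""
    new_lines, ref_lines = new.splitlines(), ref.splitlines()
    if len(new_lines) != len(ref_lines):
        return False
    for a, b in zip(new_lines, ref_lines):
        try:
            if abs(float(a) - float(b)) > rel_tol * max(1.0, abs(float(b))):
                return False
        except ValueError:
            if a != b:
                return False
    return True

def run_nrt(code_exe, input_decks, reference_dir, rel_tol=1e-9):
    """Run every input deck through the code and report decks whose
    output no longer matches the stored reference (regressions)."""
    failures = []
    for deck in input_decks:
        result = subprocess.run([code_exe, str(deck)], capture_output=True, text=True)
        ref = (Path(reference_dir) / (deck.stem + ".out")).read_text()
        if not outputs_match(result.stdout, ref, rel_tol):
            failures.append(deck.name)
    return failures
```

Automating the loop this way is what turns the "time-consuming and labor-intensive" manual NRT into something that can run after every commit.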
Directory of Open Access Journals (Sweden)
Rex A. Omonode
2017-06-01
Full Text Available Few studies have assessed the common, yet unproven, hypothesis that an increase of plant nitrogen (N) uptake and/or recovery efficiency (NRE) will reduce nitrous oxide (N2O) emission during crop production. Understanding the relationships between N2O emissions and crop N uptake and use efficiency parameters can help inform crop N management recommendations for both efficiency and environmental goals. Analyses were conducted to determine which of several commonly used crop N uptake-derived parameters related most strongly to growing season N2O emissions under varying N management practices in North American maize systems. Nitrogen uptake-derived variables included total aboveground N uptake (TNU), grain N uptake (GNU), N recovery efficiency (NRE), net N balance (NNB) in relation to GNU [NNB(GNU)] and TNU [NNB(TNU)], and surplus N (SN). The relationship between N2O and N application rate was sigmoidal with relatively small emissions for N rates <130 kg ha−1, and a sharp increase for N rates from 130 to 220 kg ha−1; on average, N2O increased linearly by about 5 g N per kg of N applied for rates up to 220 kg ha−1. Fairly strong and significant negative relationships existed between N2O and NRE when management focused on N application rate (r2 = 0.52) or rate and timing combinations (r2 = 0.65). For every percentage point increase, N2O decreased by 13 g N ha−1 in response to N rates, and by 20 g N ha−1 for NRE changes in response to rate-by-timing treatments. However, more consistent positive relationships (R2 = 0.73–0.77) existed between N2O and NNB(TNU), NNB(GNU), and SN, regardless of rate and timing of N application; on average N2O emission increased by about 5, 7, and 8 g N, respectively, per kg increase of NNB(GNU), NNB(TNU), and SN. Neither N source nor placement influenced the relationship between N2O and NRE. Overall, our analysis indicated that a careful selection of appropriate N rate applied at the right time can both increase NRE and reduce N
Energy Technology Data Exchange (ETDEWEB)
Fercu, M.; Kistler, R. [Hochschule Luzern Technik und Architektur, CEESAR - iHomeLab, Horw (Switzerland); Egli, A. [Hochschule Luzern Technik und Architektur, ISIS, Horw (Switzerland); Gallati, J. [Hochschule Luzern Technik und Architektur, Wirtschaft, Horw (Switzerland)
2010-09-15
Individuals are empowered to conserve energy and natural resources when provided with motivational and personalized information on its use. By presenting information about the energy consumption from the home's energy meters along with recommended actions, the residential customer becomes aware of how in/efficiently energy is consumed within his home and can decide on how to act to conserve. This information can provide an accurate metric of how effective a conservation action is even to inhabitants that are not yet knowledgeable about or self-motivated by the monetary and ecologic rewards of conserving. This project was designed to build knowledge on technically and economically feasible ways to create an awareness of energy (especially electricity) for the sake of conservation. Specifically, it implements an exemplar prototype of a highly effective energy feedback system that is an interactive, real-time, in-home display. Toward this goal, four system architecture configuration proposals, a set of system requirements, and ideal system features are synthesized; they are based on the results of the research that evaluates effectiveness of existing energy-efficiency and -conservation methods and studies related technologies. Three of the four systems proposed represent energy technologies expected to be available within the next decade. The fourth system proposal is a demonstration prototype designed for implementation in the iHomeLab. This prototype is an open, modular, robust, cross-platform software framework that collects data, processes, and presents it interactively and visually on hardware available in most households. The results of this project both indicate that the creation of such energy feedback systems appear beneficial and also provide guidelines for their design. However, further development of infrastructure and elaboration of design is foreseen as necessary for this system to be suitable for mass deployment. (author)
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs submitted by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
Energy Technology Data Exchange (ETDEWEB)
Janneck, E.; Glombitza, F. [G.E.O.S. Freiberg Ingenieurgesellschaft mbH, Freiberg (Germany); Schlee, K.; Arnold, I. [Vattenfall Europe Mining AG, Cottbus (Germany)
2006-07-01
The article presents experiences and results from the application of new aerator systems in mine water treatment. The processes of ferrous iron oxidation and sludge removal became more stable and efficient through the application of the aerators. For the first time, spiral aerators were used in the Lower Lusatia lignite mining district to clean mine water containing ferrous iron. These devices led to an enhanced iron oxidation rate under the existing conditions, where oxygen diffusion is the rate-determining step. Furthermore, their application increased throughput, optimised lime utilisation and improved sludge thickening, which led to a higher efficiency of the mine water treatment. (orig.)
I. Fisk
2013-01-01
Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G
1996-01-01
Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.
Mulder, T. E.; Baatsen, M. L.J.; Wubs, F.W.; Dijkstra, H. A.
2017-01-01
In the field of paleoceanographic modeling, the different positioning of Earth's continental configurations is often a major challenge for obtaining equilibrium ocean flow solutions. In this paper, we introduce numerical parameter continuation techniques to compute equilibrium solutions of ocean
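The continuation idea summarized above can be illustrated on a scalar toy problem (hypothetical, not the ocean model itself): follow the root of f(u, λ) = 0 as λ varies, reusing each converged solution as the Newton starting guess at the next parameter value.

```python
import math

def continuation(f, df, u0, lambdas, tol=1e-12):
    """Natural parameter continuation: track a solution branch of
    f(u, lam) = 0 by Newton iteration, using the previous converged
    solution as the initial guess for the next parameter value."""
    u, branch = u0, []
    for lam in lambdas:
        for _ in range(50):                     # Newton's method at fixed lam
            step = f(u, lam) / df(u, lam)
            u -= step
            if abs(step) < tol:
                break
        branch.append((lam, u))
    return branch

# Toy equilibrium equation u - lam*cos(u) = 0, standing in for the
# discretised ocean-model residual (illustrative only)
f  = lambda u, lam: u - lam * math.cos(u)
df = lambda u, lam: 1 + lam * math.sin(u)

branch = continuation(f, df, u0=0.0, lambdas=[i / 20 for i in range(21)])
print(branch[-1])   # -> (1.0, 0.7390851...), the solution of u = cos(u)
```

Pseudo-arclength variants additionally parametrize by arclength so the branch can be followed around folds, which is what makes such techniques attractive for equilibrium ocean flows.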
Assessing the efficiency of information protection systems in the computer systems and networks
Nachev, Atanas; Zhelezov, Stanimir
2015-01-01
The specific features of information protection systems in computer systems and networks require the development of non-trivial methods for their analysis and assessment. Approaches to solutions in this area are presented in this paper.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho
2015-01-01
A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).
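A minimal sketch of the division of labour described above, assuming a pre-shared gateway-sensor key (all names and the token format are hypothetical): the expensive elliptic-curve step stays on the user-gateway leg, while the sensor verifies only a lightweight HMAC.

```python
import hmac, hashlib, os

def gateway_token(sensor_key: bytes, user_alias: bytes, nonce: bytes) -> bytes:
    # After the (ECC-based, anonymous) user<->gateway authentication, the
    # gateway vouches for the session using symmetric crypto only.
    return hmac.new(sensor_key, user_alias + nonce, hashlib.sha256).digest()

def sensor_verify(sensor_key: bytes, user_alias: bytes, nonce: bytes, token: bytes) -> bool:
    # The sensor performs only one hash-based operation per request.
    expected = hmac.new(sensor_key, user_alias + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)   # constant-time compare

key   = os.urandom(32)          # pre-shared gateway<->sensor key
nonce = os.urandom(16)
tok   = gateway_token(key, b"anon-alias-42", nonce)
print(sensor_verify(key, b"anon-alias-42", nonce, tok))   # -> True
print(sensor_verify(key, b"other-user", nonce, tok))      # -> False
```

The anonymous alias stands in for the pseudonymous identity the ECC handshake would establish; the point of the design is that scalar-point multiplications never reach the resource-constrained sensor.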
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2010-01-01
Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, of...
Zhang, Haibin; Wang, Yan; Zhang, Xiuzhen; Lim, Ee-Peng
2013-01-01
In e-commerce environments, the trustworthiness of a seller is utterly important to potential buyers, especially when the seller is unknown to them. Most existing trust evaluation models compute a single value to reflect the general trust level of a seller without taking any transaction context information into account. In this paper, we first present a trust vector consisting of three values for Contextual Transaction Trust (CTT). In the computation of three CTT values, the identified three ...
Efficient computation of global sensitivity indices using sparse polynomial chaos expansions
International Nuclear Information System (INIS)
Blatman, Geraud; Sudret, Bruno
2010-01-01
Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables for the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove impractical for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross-validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples of up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.
Efficient computation of global sensitivity indices using sparse polynomial chaos expansions
Energy Technology Data Exchange (ETDEWEB)
Blatman, Geraud, E-mail: geraud.blatman@edf.f [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); EDF, R and D Division - Site des Renardieres, F-77818 Moret-sur-Loing (France); Sudret, Bruno, E-mail: sudret@phimeca.co [Clermont Universite, IFMA, EA 3867, Laboratoire de Mecanique et Ingenieries, BP 10448, F-63000 Clermont-Ferrand (France); Phimeca Engineering, Centre d' Affaires du Zenith, 34 rue de Sarlieve, F-63800 Cournon d' Auvergne (France)
2010-11-15
Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables for the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol' indices are well known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove impractical for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, whereas the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross-validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples of up to 21 stochastic dimensions. Accurate results are obtained at a computational cost 2-3 orders of magnitude smaller than that associated with Monte Carlo simulation.
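The analytical post-processing mentioned above can be sketched as follows, assuming an orthonormal PC basis so that variance contributions are simply sums of squared coefficients (the two-input expansion below is invented for illustration):

```python
def sobol_from_pc(coeffs):
    """First-order and total Sobol' indices read off analytically from the
    coefficients of a polynomial-chaos expansion in an ORTHONORMAL basis.
    `coeffs` maps a multi-index tuple (degree per input) -> coefficient."""
    var = sum(c * c for a, c in coeffs.items() if any(a))   # total variance
    dim = len(next(iter(coeffs)))
    first, total = [], []
    for i in range(dim):
        # Terms depending ONLY on input i -> first-order index
        only_i = sum(c * c for a, c in coeffs.items()
                     if a[i] > 0 and all(a[j] == 0 for j in range(dim) if j != i))
        # Terms depending on input i at all -> total index
        any_i = sum(c * c for a, c in coeffs.items() if a[i] > 0)
        first.append(only_i / var)
        total.append(any_i / var)
    return first, total

# Hypothetical expansion Y = 1 + 3*psi(1,0) + 2*psi(0,1) + 1*psi(1,1)
pc = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 2.0, (1, 1): 1.0}
first, total = sobol_from_pc(pc)
print(first)   # -> [9/14, 4/14]: variance shares of x1 and x2 alone
print(total)   # -> [10/14, 5/14]: including the interaction term
```

This is exactly why the PC route avoids Monte Carlo: once the sparse expansion is fitted, the indices cost nothing beyond these sums.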
Dulaney, Malik H.
2013-01-01
Emerging technologies challenge the management of information technology in organizations. Paradigm changing technologies, such as cloud computing, have the ability to reverse the norms in organizational management, decision making, and information technology governance. This study explores the effects of cloud computing on information technology…
Poder, Thomas G; Godbout, Sylvie T; Bellemare, Christian
This paper describes a comparative study of clinical coding by Archivists (also known as Clinical Coders in some other countries) using single and dual computer monitors. In the present context, processing a record corresponds to checking the available information; searching for the missing physician information; and finally, performing clinical coding. We collected data for each Archivist during her use of the single monitor for 40 hours and during her use of the dual monitor for 20 hours. During the experimental periods, Archivists did not perform other related duties, so we were able to measure the real-time processing of records. To control for the type of records and their impact on the process time required, we categorised the cases as major or minor, based on whether acute care or day surgery was involved. Overall results show that 1,234 records were processed using a single monitor and 647 records using a dual monitor. The time required to process a record was significantly higher (p = .071) with a single monitor compared to a dual monitor (19.83 vs. 18.73 minutes). However, the percentage of major cases was significantly higher (p = .000) in the single monitor group compared to the dual monitor group (78% vs. 69%). As a consequence, we adjusted our results, which reduced the difference in time required to process a record between the two systems from 1.1 to 0.61 minutes. Thus, the net real-time difference was only 37 seconds in favour of the dual monitor system. Extrapolated over a 5-year period, this would represent a time savings of 3.1% and generate a net cost savings of $7,729 CAD (Canadian dollars) for each workstation that devoted 35 hours per week to the processing of records. Finally, satisfaction questionnaire responses indicated a high level of satisfaction and support for the dual-monitor system. The implementation of a dual-monitor system in a hospital archiving department is an efficient option in the context of scarce human resources and has the
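The adjustment arithmetic above checks out directly (figures taken from the abstract):

```python
# Checking the reported single- vs dual-monitor arithmetic.
single, dual = 19.83, 18.73          # minutes per record, raw means
raw_diff = single - dual             # 1.1 minutes before case-mix adjustment
adjusted_diff = 0.61                 # minutes, after adjusting for major/minor mix

print(round(adjusted_diff * 60))               # -> 37 seconds net difference
print(round(adjusted_diff / single * 100, 1))  # -> 3.1 percent time saving
```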
Directory of Open Access Journals (Sweden)
Faramarz eFaghihi
2015-03-01
Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). The activity pattern in the EC is separated by the DG, which thus plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured using simulations with different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of neurons in the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and a very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficient pattern separation in the DG has been observed.
Faghihi, Faramarz; Moustafa, Ahmed A.
2015-01-01
Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). The activity pattern in the EC is separated by the DG, which thus plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured using simulations with different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of granule cells of the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and a very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficient pattern separation in the DG has been observed. PMID:25859189
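A heavily simplified sketch of the pattern-separation measurement described above (population sizes, connectivity rate and sparseness level are illustrative, not the paper's values): two similar EC patterns are projected through random EC→DG connectivity, and feedback inhibition is approximated by a k-winners-take-all rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ec, n_dg = 200, 1000                               # illustrative population sizes
W = (rng.random((n_dg, n_ec)) < 0.1).astype(float)   # sparse EC->DG connectivity (10%)

def dg_response(ec_pattern, inhibition):
    drive = W @ ec_pattern.astype(float)             # feed-forward input per granule cell
    out = np.zeros(n_dg, dtype=bool)
    if inhibition:                                   # feedback inhibition ~ k-winners-take-all
        k = int(0.05 * n_dg)                         # only 5% of granule cells may fire
        out[np.argpartition(drive, -k)[-k:]] = True
    else:
        out[drive > drive.mean()] = True             # no inhibition: many cells fire
    return out

def overlap(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

# Two similar EC input patterns (a few active neurons flipped)
p1 = rng.random(n_ec) < 0.2
p2 = p1.copy(); flip = rng.random(n_ec) < 0.04; p2[flip] = ~p2[flip]

for inh in (False, True):
    o_in, o_out = overlap(p1, p2), overlap(dg_response(p1, inh), dg_response(p2, inh))
    print(f"inhibition={inh}: input overlap {o_in:.2f} -> DG overlap {o_out:.2f}")
```

With inhibition, DG activity is sparse by construction; without it, roughly half the population fires, mirroring the low-separation, high-frequency regime the abstract describes.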
Four-shell atomic model to compute the counting efficiency of electron-capture nuclides
International Nuclear Information System (INIS)
Grau Malonda, A.; Fernandez Martinez, A.
1985-01-01
The present paper develops a four-shell atomic model in order to obtain the detection efficiency in liquid scintillation counting. Mathematical expressions are given to calculate the probabilities of the 229 different atomic rearrangements, as well as the corresponding effective energies. This new model permits the study of the influence of the different parameters upon the counting efficiency for nuclides of high atomic number. (Author) 7 refs
International Nuclear Information System (INIS)
Smolinska, M.; Cik, G.; Sersen, F.; Caplovicova, M.; Takacova, A.; Kopani, M.
2015-01-01
The composite system can be prepared by incorporation of methylene blue into the channels of zeolite and by adsorption on the surface of the crystals. The composite photosensitizer effectively absorbs red light (λmax = 648 nm) and, upon illumination with a light-emitting diode at a fluence rate of 1.02 mW cm-2, effectively generates reactive singlet oxygen in aqueous solution, which was proved by EPR spectroscopy. To test the efficiency for inactivation of pathogenic microorganisms, we measured photokilling of the bacteria Escherichia coli and Staphylococcus aureus and the yeast Candida albicans. We found that after the microorganisms have been adsorbed at the surface of such modified zeolite, the photogenerated singlet oxygen quickly penetrates their cell walls, bringing about their effective photoinactivation. The growth inhibition reached almost 50 % at 200 and 400 mg of modified zeolite in 1 ml of medium for E. coli and C. albicans, respectively. On the other hand, the growth inhibition of S. aureus reached 50 % at a far smaller amount of photocatalyst (30 µg per 1 ml of medium). These results demonstrate differences in the sensitivities of bacterial and yeast growth. The comparison revealed that the concentration required for IC50 was, in the case of C. albicans, several orders of magnitude lower for a zeolite-immobilized dye than for a freely dissolved dye. In S. aureus, this concentration was even lower, by four orders of magnitude. Thus, our work suggests a new possibility for the exploitation of zeolite and methylene blue in the protection of biologically contaminated environments, and in photodynamic therapy.
International Nuclear Information System (INIS)
Wijaya, Muhammad Ery; Tezuka, Tetsuo
2013-01-01
Highlights: ► We observe human psychosocial variables regarding the purchase of electrical appliances. ► Two cities with different cultures are the subject of this study – Bandung and Yogyakarta. ► Differences in the lifetime of appliances can be attributed to culture. ► Ads and store staff have the greatest impact on people's choice of appliances. ► Adoption of higher-efficiency appliances could be implemented based on each culture. - Abstract: One approach to decreasing electricity consumption is to facilitate the replacement of older appliances with new, higher-efficiency models. The objectives of this paper are to compare and analyse the replacement of appliances in two cities of Indonesia – Yogyakarta and Bandung – that are characterised by different cultural backgrounds, ethnicities, and decision-making processes in the household purchase of electrical appliances. A questionnaire survey method was employed to obtain information on behavioural economics and human psychosocial variables such as attitudes, beliefs and perceived benefits regarding the replacement and purchase of electrical appliances. The results show that refrigerators in Yogyakarta have a longer lifetime than in Bandung. However, in Bandung, air conditioners, electric fans, rice cookers, and water pumps have a longer lifetime than in Yogyakarta. These differences in the lifetime of appliances can be attributed to the cultural differences between the two cities, which are reflected in the manner in which people use electrical appliances, as well as to their lack of knowledge regarding appliance operation. An analysis of the factors influencing the purchase of appliances indicated that people in Yogyakarta show a greater awareness of the benefits of adopting higher-efficiency appliances than do people in Bandung. The following suggestions could be implemented to improve the strategy of encouraging the adoption of higher-efficiency appliances: (1) in Yogyakarta, energy labelling could be
International Nuclear Information System (INIS)
Su, Jianye; Xu, Min; Li, Tie; Gao, Yi; Wang, Jiasheng
2014-01-01
Highlights: • Experiments on the effects of cooled EGR and two compression ratios (CR) on fuel efficiency were conducted. • The mechanism for the observed fuel efficiency behaviour under cooled EGR and high CR was clarified. • Cooled EGR offers more fuel efficiency improvement than elevating CR from 9.3 to 10.9. • Combining 18–25% cooled EGR with a 10.9 CR leads to 2.1–3.5% brake thermal efficiency improvements. - Abstract: The downsized boosted spark-ignition direct-injection (SIDI) engine has proven to be one of the most promising concepts for improving vehicle fuel economy. However, the boosted engine is typically designed at a lower geometric compression ratio (CR) due to the increased knock tendency in comparison to naturally aspirated engines, limiting the potential for improving fuel economy. On the other hand, cooled exhaust gas recirculation (EGR) has drawn attention due to its potential to suppress knock and improve fuel economy. Combining the effects of boosting, increased CR and cooled EGR to further improve fuel economy within acceptable knock tolerance has been investigated using a 2.0 L downsized boosted SIDI engine over a wide range of engine operating conditions from 1000 rpm to 3000 rpm at low to high loads. To clarify the mechanism of these combined effects, a first-law-of-thermodynamics analysis was conducted with inputs from a GT-Power® engine simulation. Experimental results indicate that cooled EGR provides more brake thermal efficiency improvement than increasing the geometric CR from 9.3 to 10.9. The benefit in brake thermal efficiency from the higher CR is limited to low load conditions. The contributors to the improved brake thermal efficiency with cooled EGR include reduced heat transfer loss, reduced pumping work and an increased ratio of specific heats for all engine operating conditions, as well as a higher degree of constant-volume heat release only for the knock-limited high load conditions. The combined effects of 18–25% cooled EGR
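The qualitative claim that cooled EGR (which raises the ratio of specific heats) can beat a modest CR increase is already visible in the ideal Otto-cycle efficiency formula η = 1 − CR^(1−γ); the γ values below are illustrative, not measured:

```python
def otto_efficiency(cr: float, gamma: float) -> float:
    """Ideal Otto-cycle thermal efficiency: eta = 1 - cr**(1 - gamma)."""
    return 1.0 - cr ** (1.0 - gamma)

base      = otto_efficiency(9.3, 1.30)    # baseline CR, no EGR
higher_cr = otto_efficiency(10.9, 1.30)   # raise geometric CR only
with_egr  = otto_efficiency(9.3, 1.33)    # dilution raises the ratio of specific heats
```

With these illustrative numbers the EGR case edges out the higher-CR case (≈0.521 vs ≈0.512 against a baseline of ≈0.488), mirroring the direction of the experimental finding; the real engine adds heat-transfer and pumping effects the ideal cycle ignores.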
Computer-aided modeling for efficient and innovative product-process engineering
DEFF Research Database (Denmark)
Heitzig, Martina
Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging, time-consuming and therefore costly task. Case studies in chemical and biochemical engineering have been solved to illustrate the application of the generic modelling methodology, the computer-aided modelling framework and the developed software tool.
[Efficiency of computer-based documentation in long-term care--preliminary project].
Lüngen, Markus; Gerber, Andreas; Rupprecht, Christoph; Lauterbach, Karl W
2008-06-01
In Germany the documentation of processes in long-term care is mainly paper-based. Planning, realization and evaluation are not supported in an optimal way. In a preliminary study we evaluated the consequences of the introduction of a computer-based documentation system using handheld devices. We interviewed 16 persons before and after introducing the computer-based documentation and assessed costs for the documentation process and administration. The results show that reducing costs is likely. The job satisfaction of the personnel increased, more time could be spent for caring for the residents. We suggest further research to reach conclusive results.
FROM CELLULAR NETWORKS TO MOBILE CLOUD COMPUTING: SECURITY AND EFFICIENCY OF SMARTPHONE SYSTEMS
BARBERA, MARCO VALERIO
2013-01-01
In my first year of my Computer Science degree, if somebody had told me that the few years ahead of me could have been the last ones of the so-called PC-era, I would have hardly believed him. Sure, I could imagine computers becoming smaller, faster and cheaper, but I could have never imagined that in such a short time the focus of the market would have so dramatically shifted from PCs to personal devices. Today, smartphones and tablets have become our inseparable companions, changing for the b...
A systematic and efficient method to compute multi-loop master integrals
Directory of Open Access Journals (Sweden)
Xiao Liu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
Efficient method for computing the electronic transport properties of a multiterminal system
Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio
2018-04-01
We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
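For a clean one-dimensional tight-binding chain, the Landauer transmission can be computed in a few lines; this sketch inverts the small Hamiltonian directly, whereas the recursive scheme of the paper builds the same Green's function slice by slice (chain length, hopping and energy below are illustrative):

```python
import numpy as np

def transmission(E, n_sites=8, t=1.0):
    """Landauer transmission T(E) = Gamma_L * |G_1N|^2 * Gamma_R for a
    clean 1D tight-binding chain between two semi-infinite leads."""
    k = np.arccos(-E / (2 * t))          # lead dispersion E = -2 t cos(k)
    sigma = -t * np.exp(1j * k)          # retarded surface self-energy of each lead
    gamma = 2 * t * np.sin(k)            # broadening, i(Sigma - Sigma^dagger)
    H = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    A = E * np.eye(n_sites, dtype=complex) - H
    A[0, 0] -= sigma                     # left lead attached to the first site
    A[-1, -1] -= sigma                   # right lead attached to the last site
    G = np.linalg.inv(A)                 # RGF builds this inverse slice by slice
    return gamma * gamma * abs(G[0, -1]) ** 2

print(transmission(0.5))   # ≈ 1.0: perfect transmission inside the band
```

A disordered sample (random on-site energies) and extra probe self-energies on interior sites would turn this into the multiprobe Hall-bar setting the abstract describes.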
A systematic and efficient method to compute multi-loop master integrals
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
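The strategy can be mimicked on a toy "master integral" I(x) = ∫₀¹ dt/(t+x), which obeys dI/dx = 1/(1+x) − 1/x with an almost trivial boundary condition I ≈ 1/x at large x (this scalar example is ours, not the paper's):

```python
import math

def rhs(x, I):
    # ODE satisfied by I(x) = integral_0^1 dt/(t+x) = log(1 + 1/x)
    return 1.0 / (1.0 + x) - 1.0 / x

# Impose the asymptotic (almost trivial) boundary value at x0 = 100 ...
x, I = 100.0, 1/100 - 1/(2 * 100**2) + 1/(3 * 100**3)
# ... and integrate the ODE numerically down to the kinematic point x = 1.
n = 20000
h = (1.0 - x) / n                        # negative step: integrate downward
for _ in range(n):                       # classical fourth-order Runge-Kutta
    k1 = rhs(x, I)
    k2 = rhs(x + h / 2, I + h * k1 / 2)
    k3 = rhs(x + h / 2, I + h * k2 / 2)
    k4 = rhs(x + h, I + h * k3)
    I += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

print(I, math.log(2.0))   # both ≈ 0.693147, since I(1) = log(2)
```

The real method does the same for a coupled system of integrals, which is why arbitrary kinematic configurations become accessible once the ODE system and one easy boundary point are known.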
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
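Critical-path analysis of a task graph, as used for the allocation step, reduces to a longest-weighted-path computation over the DAG; the task graph and durations below are hypothetical:

```python
from functools import lru_cache

# Hypothetical task graph: duration of each task and its predecessors.
tasks = {"read": 2, "estimate": 5, "predict": 3, "guide": 4, "output": 1}
deps = {"estimate": ["read"], "predict": ["read"],
        "guide": ["estimate", "predict"], "output": ["guide"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Longest-path (critical-path) finish time of `task` in the DAG."""
    start = max((earliest_finish(d) for d in deps.get(task, [])), default=0)
    return start + tasks[task]

length = max(earliest_finish(t) for t in tasks)
print(length)   # -> 12, via read -> estimate -> guide -> output
```

Tasks on that longest path bound the schedule length, so an allocator prioritises placing them on fast, lightly loaded processing elements.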
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
Directory of Open Access Journals (Sweden)
Wasiak Andrzej
2017-01-01
Energetic efficiency of biofuel production systems, as well as that of other fuel production systems, can be evaluated on the basis of a modified EROEI indicator. In earlier papers, a new definition of the EROEI indicator was introduced. This approach enables the determination of the indicator separately for individual subsystems of a chosen production system, and therefore enables study of the influence of every subsystem on the energetic efficiency of the system as a whole. The method has been applied to the analysis of interactions between the agricultural and internal transport subsystems, as well as to preliminary studies of the effect of the industrial subsystem.
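A minimal sketch of the per-subsystem EROEI decomposition (the energy figures are invented for illustration, not taken from the paper):

```python
def eroei(energy_out, invested_by_subsystem):
    """Energy returned over energy invested, with the invested energy
    broken down per subsystem so each subsystem's drag on the overall
    energetic efficiency is visible."""
    total_in = sum(invested_by_subsystem.values())
    overall = energy_out / total_in
    # Share of the total invested energy attributable to each subsystem
    shares = {name: e / total_in for name, e in invested_by_subsystem.items()}
    return overall, shares

out = 100.0                     # MJ of biofuel delivered (illustrative)
invested = {"agriculture": 25.0, "transport": 5.0, "industry": 20.0}
overall, shares = eroei(out, invested)
print(overall)                  # -> 2.0 MJ returned per MJ invested
print(shares["transport"])      # -> 0.1 of the invested energy
```

Recomputing the ratio with one subsystem's investment removed shows how much that subsystem alone depresses the system-wide indicator, which is the kind of analysis the abstract describes.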