WorldWideScience

Sample records for high computational cost

  1. Computer Vision Approach for Low Cost, High Precision Measurement of Grapevine Trunk Diameter in Outdoor Conditions

    OpenAIRE

    Pérez, Diego Sebastián; Bromberg, Facundo; Antivilo, Francisco Gonzalez

    2014-01-01

    Trunk diameter is a variable of agricultural interest, used mainly in the prediction of fruit tree production. It is correlated with leaf area and tree biomass, and consequently gives a good estimate of the potential production of the plants. This work presents a low cost, high precision method for the measurement of the trunk diameter of grapevines based on Computer Vision techniques. Several methods based on Computer Vision and other techniques are introduced in the literature. These metho...

  2. Low-cost, high-performance and efficiency computational photometer design

    Science.gov (United States)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance and high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near- to long-wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time correlate read-out, capture, and image process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.

  3. Commodity CPU-GPU System for Low-Cost, High-Performance Computing

    Science.gov (United States)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under 2.5 K dollars from commodity components, consisting of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4GHz) and two high end GPUs (nVidia's GeForce GTX 295 and Tesla C1060). A 1200 watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment, in which Message Passing Interface (MPI) is used for managing the workloads, for transferring the data among different GPU devices, and for minimizing the need for CPU memory. The test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that the speedups for double precision calculations can be greater than 10 (GPU vs. CPU) and are bigger (> 20) for single precision calculations. In addition we have enabled the combination of Matlab with CUDA for interactive visualization through MPI, i.e., two GPU devices are used for simulation and one GPU device is used for visualizing the computing results as the simulation goes. Our experience with this commodity system has shown that running multiple applications on one GPU device or running one application across multiple GPU devices can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x compared to the 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to get ourselves ready to run large-scale simulations on the next generation of computing environments, which we believe will hybridize CPU and GPU architectures.
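
    A minimal sketch (not the authors' code) of the MPI-plus-multi-GPU pattern described above, assuming mpi4py and CuPy are available: each MPI rank binds to one GPU, computes a local block, and rank 0 gathers the host-side results. It would be launched with, for example, mpiexec -n 2 python gpu_blocks.py on a node with two GPUs.

```python
# Hypothetical sketch: one MPI rank per GPU; each rank multiplies its own
# matrices on its device and rank 0 gathers the results over MPI.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Bind this rank to one of the visible GPUs (round-robin assignment).
cp.cuda.Device(rank % cp.cuda.runtime.getDeviceCount()).use()

# Each rank works on its own slice of the problem in double precision.
n = 2048
a = cp.random.rand(n, n)
b = cp.random.rand(n, n)
local = cp.asnumpy(a @ b)            # compute on the GPU, copy back to host

# Rank 0 collects every rank's block; MPI only ever handles host memory here.
blocks = comm.gather(local, root=0)
if rank == 0:
    print(f"gathered {len(blocks)} blocks of shape {blocks[0].shape}")
```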

  4. Low cost, highly effective parallel computing achieved through a Beowulf cluster.

    Science.gov (United States)

    Bitner, Marc; Skelton, Gordon

    2003-01-01

    A Beowulf cluster is a means of bringing together several computers and using software and network components to make this cluster of computers appear and function as one computer with multiple parallel processors. A cluster of computers can provide computing power comparable to that usually found only in very expensive supercomputers or servers.

  5. Cost effectiveness of high resolution computed tomography with interferon-gamma release assay for tuberculosis contact investigation

    Energy Technology Data Exchange (ETDEWEB)

    Kowada, Akiko, E-mail: kowadaa@gmail.com [Kojiya Haneda Healthcare Service, Ota City Public Health Office, Tokyo (Japan)

    2013-08-15

    Background: Tuberculosis contact investigation is one of the important public health strategies to control tuberculosis worldwide. Recently, high resolution computed tomography (HRCT) has been reported as a more accurate radiological method, with higher sensitivity and specificity than chest X-ray (CXR), to detect active tuberculosis. In this study, we assessed the cost effectiveness of HRCT compared to CXR in combination with QuantiFERON®-TB Gold In-Tube (QFT) or the tuberculin skin test (TST) for tuberculosis contact investigation. Methods: We constructed Markov models using a societal perspective on the lifetime horizon. The target population was a hypothetical cohort of immunocompetent 20-year-old contacts of smear-positive tuberculosis patients in developed countries. Six strategies were modeled: QFT followed by CXR, QFT followed by HRCT, TST followed by CXR, TST followed by HRCT, CXR alone and HRCT alone. All costs and clinical benefits were discounted at a fixed annual rate of 3%. Results: In the base-case analysis, the QFT followed by HRCT strategy yielded the greatest benefit at the lowest cost (US$6308.65; 27.56045 quality-adjusted life-years [QALYs]; year 2012 values). Cost-effectiveness was sensitive to the BCG vaccination rate. Conclusions: The QFT followed by HRCT strategy yielded the greatest benefits at the lowest cost. HRCT chest imaging, instead of CXR, is recommended as a cost effective addition to the evaluation and management of tuberculosis contacts in public health policy.
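
    The arithmetic behind the discounting and the incremental cost-effectiveness ratio (ICER) used in studies like this one, shown with purely hypothetical cost and QALY streams; only the 3% discount rate is taken from the abstract.

```python
# Illustrative only: discount yearly costs/QALYs at 3% and form an ICER.
def present_value(stream, rate=0.03):
    """Discount a list of yearly values back to year 0."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

# Hypothetical lifetime streams for two strategies (cost in $, effect in QALYs).
cost_a, qaly_a = present_value([300, 50, 50]), present_value([0.95, 0.94, 0.93])
cost_b, qaly_b = present_value([150, 80, 80]), present_value([0.94, 0.93, 0.92])

# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
icer = (cost_a - cost_b) / (qaly_a - qaly_b)
print(f"ICER = ${icer:,.0f} per QALY gained")
```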

  6. Game Theory with Costly Computation

    CERN Document Server

    Halpern, Joseph Y

    2008-01-01

    We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations of observed behavior in well-studied games (such as finitely repeated prisoner's dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist. As an application of this framework, we consider a notion of game-theoretic implementation of mediators in computational games. We show that a special case of this notion is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with "deviating" players in two different communities -- Nash equilibrium in game theory and zero-knowledge "simula...

  7. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    Science.gov (United States)

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the US National Institute for Occupational Safety and Health (NIOSH). The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER'S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
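
    The risk multipliers mentioned above come from the revised NIOSH lifting equation; a sketch of its textbook metric form follows, with the frequency and coupling multipliers (FM, CM) taken as given from the published NIOSH tables and all input values hypothetical.

```python
# Textbook metric form of the revised NIOSH lifting equation.  FM and CM come
# from the published NIOSH tables and are passed in rather than recomputed.
def recommended_weight_limit(H, V, D, A, FM, CM, LC=23.0):
    HM = 25.0 / max(H, 25.0)            # horizontal multiplier (H in cm)
    VM = 1.0 - 0.003 * abs(V - 75.0)    # vertical multiplier (V in cm)
    DM = 0.82 + 4.5 / max(D, 25.0)      # distance multiplier (D in cm)
    AM = 1.0 - 0.0032 * A               # asymmetry multiplier (A in degrees)
    return LC * HM * VM * DM * AM * FM * CM   # RWL in kg

# Lifting index = actual load / RWL; values above 1 indicate elevated risk.
load_kg = 12.0                           # hypothetical measured load
rwl = recommended_weight_limit(H=40, V=60, D=50, A=30, FM=0.88, CM=0.95)
print(f"RWL = {rwl:.1f} kg, lifting index = {load_kg / rwl:.2f}")
```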

  8. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while also reducing the cost of doing business. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier, with no up-front charges but pay per-use flexible payme...

  9. Cost Considerations in Cloud Computing

    Science.gov (United States)

    2014-01-01

    development of a range of new distributed file systems and databases that have better scalability properties than traditional SQL databases. Hadoop ... data. Many systems exist that extend or supplement Hadoop, such as Apache Accumulo, which provides a highly granular mechanism for managing security ... Accumulo database, when implemented on Hadoop, has a data ingestion rate significantly higher than that provided by Oracle. However, it should be

  10. Charging for computer usage with average cost pricing

    CERN Document Server

    Landau, K

    1973-01-01

    This preliminary report, which is mainly directed at commercial computer centres, gives an introduction to the application of average cost pricing when charging for the use of computer resources. A description of the cost structure of a computer installation shows the advantages and disadvantages of average cost pricing. This is complemented by a discussion of the different charging rates that are possible. (10 refs).
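
    A minimal sketch of the average cost pricing idea with hypothetical figures: the charging rate is the total installation cost divided by the total chargeable usage, so revenue over the period exactly recovers cost.

```python
# Average-cost pricing for a computer centre: rate = total cost / total usage,
# so the centre neither profits nor loses over the charging period.
total_cost = 120_000.0          # hypothetical yearly cost of the installation
total_cpu_hours = 48_000.0      # hypothetical chargeable CPU-hours delivered

rate = total_cost / total_cpu_hours          # price per CPU-hour
job_usage_hours = 35.5                       # one user's metered usage
print(f"rate = {rate:.2f}/CPU-hour, charge = {rate * job_usage_hours:.2f}")
```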

  11. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  12. An Optimal Solution of Resource Provisioning Cost in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Arun Pandian

    2013-03-01

    In cloud computing, providing optimal resources to users becomes more and more important. Cloud computing users can access a pool of computing resources through the internet, and cloud providers charge for these resources based on usage. The resource plans offered are reservation and on-demand. Resources are provisioned by a cloud resource provisioning model; in this model the resource cost is high because it is difficult to optimize the cost under uncertainty. The uncertainty in the provisioning cost has three components: on-demand cost, reservation cost and expending cost. This makes it difficult to reach an optimal resource provisioning cost in cloud computing. Stochastic integer programming is applied to this difficult problem of obtaining the optimal resource provisioning cost. Two-stage stochastic integer programming with recourse is applied to handle the complexity of optimization under uncertainty. The stochastic program is recast as a deterministic equivalent formulation over the probability distribution of all scenarios to reduce the on-demand cost. Benders decomposition is applied to break the resource optimization problem down into multiple subproblems, reducing the on-demand and reservation costs. Sample average approximation is applied to reduce the number of scenarios in the resource optimization problem; this is used to reduce the reservation and expending costs.
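
    A toy sketch (not the paper's model) of the deterministic-equivalent form of the two-stage decision: choose how many instances to reserve so that the expected reservation-plus-on-demand cost over a handful of hypothetical demand scenarios is minimal.

```python
# Two-stage provisioning toy: first-stage decision = reserved instances,
# second-stage recourse = on-demand instances bought per demand scenario.
# Prices, demands and probabilities are hypothetical.
scenarios = [(0.2, 10), (0.5, 25), (0.3, 40)]   # (probability, demanded instances)
reserve_price, on_demand_price = 40.0, 100.0    # cost per instance

def expected_cost(reserved):
    cost = reserved * reserve_price             # here-and-now reservation cost
    for prob, demand in scenarios:              # recourse (on-demand) cost
        cost += prob * max(demand - reserved, 0) * on_demand_price
    return cost

best = min(range(0, 41), key=expected_cost)     # tiny problem: just enumerate
print(f"reserve {best} instances, expected cost = {expected_cost(best):.2f}")
```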

  13. CONSTRUCTION COST INTEGRATED CONTROL BASED ON COMPUTER SIMULATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Construction cost control is a complex systems engineering problem. The traditional control method cannot control construction cost dynamically and in advance because of its hysteresis. This paper proposes a computer-simulation-based construction cost integrated control method, which combines cost with PERT systematically, so that the construction cost can be predicted and optimized systematically and effectively. The new method overcomes the hysteresis of the traditional systems, and is a distinct improvement over them in effect and practicality.
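
    A rough sketch of the simulation idea: sample activity durations from the classical PERT beta approximation and attach a hypothetical time-dependent cost, giving a cost distribution that can be inspected before construction starts.

```python
# Monte Carlo over a serial chain of PERT activities with a duration-driven cost.
import random

# (optimistic, most likely, pessimistic) durations in days; figures hypothetical.
activities = [(4, 6, 10), (8, 10, 16), (5, 7, 12)]
daily_overhead = 2_000.0        # hypothetical cost per day of project duration

def pert_sample(o, m, p):
    # Common PERT practice: beta distribution with mean (o + 4m + p) / 6.
    alpha = 1 + 4 * (m - o) / (p - o)
    beta = 1 + 4 * (p - m) / (p - o)
    return o + (p - o) * random.betavariate(alpha, beta)

costs = sorted(
    sum(pert_sample(*a) for a in activities) * daily_overhead
    for _ in range(10_000)
)
print(f"mean cost ~ {sum(costs) / len(costs):,.0f}, "
      f"90th percentile ~ {costs[int(0.9 * len(costs))]:,.0f}")
```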

  14. Cutting Technology Costs with Refurbished Computers

    Science.gov (United States)

    Dessoff, Alan

    2010-01-01

    Many district administrators are finding that they can save money on computers by buying preowned ones instead of new ones. The practice has other benefits as well: It allows districts to give more computers to more students who need them, and it also promotes good environmental practices by keeping the machines out of landfills, where they…

  15. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs o...

  16. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    The current models of cloud computing are based on large-scale hardware solutions, making their implementation and maintenance unaffordable to the majority of service providers. The use of jail services is an alternative to current models of cloud computing based on virtualization. Models based on the utilization of jail environments instead of the virtualization systems in use today will provide huge gains in terms of optimization of hardware resources at the computation level and in terms of storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, which shows the areas where their application will be relevant and will make inevitable the redefinition of the models currently defined for cloud computing. In addition it will bring new opportunities in the development of support features for jail environments in the majority of operating systems.

  17. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  18. Current Cloud Computing Review and Cost Optimization by DERSP

    Directory of Open Access Journals (Sweden)

    M. Gomathy

    2014-03-01

    Cloud computing promises to deliver cost savings through the “pay as you use” paradigm. The focus is on adding computing resources when needed and releasing them when the need is serviced. Since cloud computing relies on providing computing power through multiple interconnected computers, there is a paradigm shift from one large machine to a combination of multiple smaller machine instances. In this paper, we review the current cloud computing scenario and provide a set of recommendations that can be used for designing custom applications suited for cloud deployment. We also present a comparative study on the change in cost incurred while using different combinations of machine instances for running an application on the cloud, and derive the case for optimal cost

  19. Cost-effectiveness analysis in markets with high fixed costs.

    Science.gov (United States)

    Cutler, David M; Ericson, Keith M Marzilli

    2010-01-01

    We consider how to conduct cost-effectiveness analysis when the social cost of a resource differs from the posted price. From the social perspective, the true cost of a medical intervention is the marginal cost of delivering another unit of a treatment, plus the social cost (deadweight loss) of raising the revenue to fund the treatment. We focus on pharmaceutical prices, which have high markups over marginal cost due to the monopoly power granted to pharmaceutical companies when drugs are under patent. We find that the social cost of a branded drug is approximately one-half the market price when the treatment is paid for by a public insurance plan and one-third the market price for mandated coverage by private insurance. We illustrate the importance of correctly accounting for social costs using two examples: coverage for statin drugs and approval for a drug to treat kidney cancer (sorafenib). In each case, we show that the correct social perspective for cost-effectiveness analysis would be more lenient than researcher recommendations.

  20. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer, Hans; Simon, Horst D.; Strohmaier, Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last five decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.
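
    A quick check of what two orders of magnitude per decade implies: roughly a 58% compound gain per year, or a doubling time of about 18 months.

```python
# Back-of-the-envelope: a 100x gain every 10 years as an annual growth factor.
import math

per_year = 100 ** (1 / 10)                       # annual growth factor
doubling_years = math.log(2) / math.log(per_year)
print(f"annual factor ~ {per_year:.2f}x, doubling every ~ {doubling_years * 12:.0f} months")
```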

  1. Intelligent Cost Modeling Based on Soft Computing for Avionics Systems

    Institute of Scientific and Technical Information of China (English)

    ZHU Li-li; LI Zhuang-sheng; XU Zong-ze

    2006-01-01

    In parametric cost estimating, objections to using statistical Cost Estimating Relationships (CERs) and parametric models include problems of low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft Computing (SC) technologies are used for building intelligent cost models. The SC models are systemically evaluated based on their training and prediction of the historical cost data of airborne avionics systems. Results indicating the strengths and weakness of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than the regression CERs.

  2. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services.

  3. Thermodynamic cost of computation, algorithmic complexity and the information metric

    Science.gov (United States)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
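
    The thermodynamic floor usually quoted alongside this line of work is Landauer's bound of k_B·T·ln 2 per erased bit, which is straightforward to evaluate numerically.

```python
# Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2 joules.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K

per_bit = k_B * T * math.log(2)
print(f"minimum cost per erased bit at {T:.0f} K: {per_bit:.2e} J")
print(f"erasing 1 GB (8e9 bits): {per_bit * 8e9:.2e} J")
```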

  4. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to put added value on the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement for a high performance computer built with a low-cost approach.

  5. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains, including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, and simultaneously provides academia with a practitioner's outlook on the problems of high-assurance software development.

  6. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
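
    A minimal sketch (not the authors' implementation) of the stochastic diagonal estimator at the core of this approach, using Rademacher probe vectors and an off-the-shelf conjugate-gradient solve in place of the mixed-precision iterative refinement described above.

```python
# diag(A^-1) estimated as the average over Rademacher probes v of v * (A^-1 v),
# where each A^-1 v comes from an iterative solver, not a factorization.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite "covariance"

num_probes = 64
estimate = np.zeros(n)
for _ in range(num_probes):
    v = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
    x, info = cg(A, v, atol=1e-8)            # iterative solve
    estimate += v * x
estimate /= num_probes

exact = np.diag(np.linalg.inv(A))            # only for checking the sketch
print("relative error:", np.linalg.norm(estimate - exact) / np.linalg.norm(exact))
```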

  7. Cost Optimization Using Hybrid Evolutionary Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    B. Kavitha

    2015-07-01

    The main aim of this research is to design a hybrid evolutionary algorithm for minimizing multiple problems of dynamic resource allocation in cloud computing. Resource allocation is one of the big problems in distributed systems when the client wants to decrease the cost of the resource allocation for their task. In order to assign resources to a task, the client must consider both the monetary cost and the computational cost, and allocating resources while considering those two costs is difficult. To solve this problem, in this study we split the client's main task into many subtasks and allocate resources to each subtask instead of selecting a single resource for the main task. The allocation of resources for each subtask is performed through our proposed hybrid optimization algorithm. Here, we hybridize Binary Particle Swarm Optimization (BPSO) and the Binary Cuckoo Search algorithm (BCSO), considering the monetary cost and the computational cost, which helps to minimize the cost to the client. Finally, the experimentation is carried out and our proposed hybrid algorithm is compared with the BPSO and BCSO algorithms. We also prove the efficiency of our proposed hybrid optimization algorithm.

  8. Computer technology in oil refining: cost or benefit

    Energy Technology Data Exchange (ETDEWEB)

    Payne, B. (KBC Process Technology (GB))

    1990-04-01

    There is undoubtedly a commitment in the oil refining industry to computerise wherever possible, and to develop advanced mathematical modelling techniques to improve profitability. However, many oil refiners are now asking themselves whether computer solutions are a cost, or are truly a benefit to their organisation. Problems have been caused by distributed computing running out of control in many organisations. This has recently been partly reined in by advanced networking of PCs along with mainframe facilities, and by the development of management information systems with common databases for all users to build their applications on. Implementation of information technology strategies has helped many refiners to plan the way ahead. The use of computers across the refining sector in the current marketplace is reviewed. The conclusion drawn is that although computer technology is a cost, it can also be ranked as a significant benefit and success in the refining industry at present. (author).

  9. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia]; Abercrombie, Robert K. [ORNL]; Sheldon, Frederick T. [ORNL]; Mili, Ali [New Jersey Institute of Technology]

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to incur. We also demonstrated this infrastructure through the results of security breakdowns for the e-commerce case. In this paper, we illustrate this infrastructure by an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
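
    The MFC computation in the authors' earlier papers is commonly summarized as a chain of matrix products, MFC = ST · DP · IM · PT (stakes, dependency, impact, threat probabilities); the sketch below assumes that structure, and every number in it is hypothetical.

```python
# Assumed structure: MFC = ST . DP . IM . PT, giving one mean failure cost per
# stakeholder.  All matrices below are illustrative placeholders.
import numpy as np

ST = np.array([[900.0, 300.0],      # 2 stakeholders x 2 requirements ($/hour stakes)
               [150.0, 600.0]])
DP = np.array([[0.6, 0.4, 0.0],     # 2 requirements x 3 components (dependency)
               [0.1, 0.5, 0.4]])
IM = np.array([[0.3, 0.0],          # 3 components x 2 threat classes (impact)
               [0.2, 0.4],
               [0.0, 0.6]])
PT = np.array([0.01, 0.005])        # threat emergence probabilities per hour

mfc = ST @ DP @ IM @ PT
for i, value in enumerate(mfc):
    print(f"stakeholder {i}: MFC = {value:.2f} $/hour")
```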

  10. Cost-effectiveness analysis of computer-based assessment

    Directory of Open Access Journals (Sweden)

    Pauline Loewenberger

    2003-12-01

    The need for more cost-effective and pedagogically acceptable combinations of teaching and learning methods to sustain increasing student numbers means that the use of innovative methods, using technology, is accelerating. There is an expectation that economies of scale might provide greater cost-effectiveness whilst also enhancing student learning. The difficulties and complexities of these expectations are considered in this paper, which explores the challenges faced by those wishing to evaluate the cost-effectiveness of computer-based assessment (CBA). The paper outlines the outcomes of a survey which attempted to gather information about the costs and benefits of CBA.

  11. Computer-generated fiscal reports for food cost accounting.

    Science.gov (United States)

    Fromm, B; Moore, A N; Hoover, L W

    1980-08-01

    To optimize resource utilization for the provision of health-care services, well designed food cost accounting systems should facilitate effective decision-making. Fiscal reports reflecting the financial status of an organization at a given time must be current and representative so that managers have adequate data for planning and controlling. The computer-assisted food cost accounting discussed in this article can be integrated with other sub-systems and operations management techniques to provide the information needed to make decisions regarding revenues and expenses. Management information systems must be routinely evaluated and updated to meet the current needs of administrators. Further improvements in the food cost accounting system will be desirable whenever substantial changes occur within the foodservice operation at the University of Missouri-Columbia Medical Center or when advancements in computer technology provide more efficient methods for manipulating data and generating reports. Development of new systems and better applications of present systems could contribute significantly to the efficiency of operations in both health care and commercial foodservices. The computer-assisted food cost accounting system reported here might serve as a prototype for other management cost information systems.

  12. Cost-effectiveness of PET and PET/Computed Tomography

    DEFF Research Database (Denmark)

    Gerke, Oke; Hermansson, Ronnie; Hess, Søren

    2015-01-01

    measure by means of incremental cost-effectiveness ratios when considering the replacement of the standard regimen by a new diagnostic procedure. This article discusses economic assessments of PET and PET/computed tomography reported until mid-July 2014. Forty-seven studies on cancer and noncancer...

  13. Factors Affecting Computer Anxiety in High School Computer Science Students.

    Science.gov (United States)

    Hayek, Linda M.; Stephens, Larry

    1989-01-01

    Examines factors related to computer anxiety measured by the Computer Anxiety Index (CAIN). Achievement in two programing courses was inversely related to computer anxiety. Students who had a home computer and had computer experience before high school had lower computer anxiety than those who had not. Lists 14 references. (YP)

  14. User manual for PACTOLUS: a code for computing power costs.

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principal that the present worth of the revenues will be equal to the present worth of the expenses including the return on investment over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)
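
    A minimal sketch of the stated principle (not PACTOLUS itself): the levelized busbar cost is the price that makes the present worth of revenues equal the present worth of expenses, i.e. discounted lifetime costs divided by discounted lifetime generation. All figures are hypothetical.

```python
# Levelized busbar cost from a discounted-cash-flow viewpoint.
def present_worth(stream, rate):
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

rate = 0.08                                    # hypothetical discount rate
cash_out = [5_000_000] + [400_000] * 30        # year-0 capital, then yearly costs ($)
energy_kwh = [0] + [60_000_000] * 30           # yearly net generation (kWh)

dollars_per_kwh = present_worth(cash_out, rate) / present_worth(energy_kwh, rate)
print(f"levelized busbar cost ~ {dollars_per_kwh * 1000:.1f} mills/kWh")
```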

  15. A Low Computation Cost Key Management Solution to MANET

    Institute of Scientific and Technical Information of China (English)

    WANG Shun-man; TAO Ran; WANG Yue; XU Kai; WANG Zhan-lu

    2006-01-01

    After concisely analyzing the typical group key agreement protocols of centralized key distribution (CKD), Diffie-Hellman (DH), group Diffie-Hellman (GDH) and tree-based group Diffie-Hellman (TGDH), the paper presents a new key management method based on threshold cryptography and a tree-based structure, which is shown to greatly reduce computation cost and accordingly improve MANET key management performance.
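
    Not the paper's threshold scheme, but the plain Diffie-Hellman exchange that CKD, GDH and TGDH all build on, sketched with a toy-sized prime used purely for illustration; real deployments use standardized 2048-bit (or larger) groups or elliptic curves.

```python
# Basic Diffie-Hellman key agreement between two MANET nodes (toy parameters).
import secrets

p = 2**127 - 1                            # Mersenne prime, toy-sized modulus
g = 3

a = secrets.randbelow(p - 2) + 1          # one node's private exponent
b = secrets.randbelow(p - 2) + 1          # the other node's private exponent

A, B = pow(g, a, p), pow(g, b, p)         # public values exchanged in the clear
shared_a, shared_b = pow(B, a, p), pow(A, b, p)
assert shared_a == shared_b               # both sides now hold g^(a*b) mod p
print("shared key material agreed:", hex(shared_a)[:18], "...")
```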

  16. Homeownership in a high-cost region

    OpenAIRE

    Esther Schlorholtz

    2006-01-01

    A perfect storm is brewing in eastern Massachusetts: high home prices, rising interest rates, and a proliferation of high-cost mortgage products. More buyer education and better state regulation of lenders not covered by the Community Reinvestment Act are needed.

  17. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  18. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    HPC (High Performance Computing) has become essential for accelerating innovation and for assisting companies in creating new inventions, better models and more reliable products, as well as in obtaining processes and services at low cost. This paper focuses in particular on describing the field of high performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field; the material presented here reveals important new directions toward the realization of a high performance computational society. The practical part of the work is an example of using HPC tools to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows solving computational and data-intensive problems using MATLAB and Simulink on multicore and multiprocessor computers.

  19. On Reducing High Computational Cost with Periodic Wavelets in Solving Two-Dimensional Acoustic Radiation and Scattering

    Institute of Scientific and Technical Information of China (English)

    文立华; 张京妹; 孙进才

    2001-01-01

    Traditional methods for solving acoustic problems in engineering often require the solution of a non-symmetric full matrix, whose dimension may be higher than 10 000, and the computational cost therefore becomes quite high. To overcome this serious shortcoming, we propose a new periodic wavelet approach for the Helmholtz integral-equation solution of two-dimensional acoustic radiation and scattering over a curved computation domain. We expand the boundary quantities in terms of periodic and orthogonal wavelets and obtain the algebraic equations needed for solving acoustic problems with Dirichlet, Neumann and mixed boundary conditions. The coefficients are evaluated with the fast wavelet transform. The advantage of the new approach is a highly sparse matrix system. We compare the numerical results obtained with the new approach against those from the boundary element method or analytical solutions; the numerical results, as given in Table 1, show that the new approach converges rapidly and is of good accuracy, and that for the same accuracy it requires far fewer unknowns than the boundary element method.

  20. Computer Controlled High-Precision, High-Voltage Pulse Generator

    Institute of Scientific and Technical Information of China (English)

    但果; 邹积岩; 丛吉远; 董恩源

    2003-01-01

    A high-precision, high-voltage pulse generator made up of high-power IGBTs and pulse transformers, controlled by a computer, is described. The simple main circuit topology employed in this pulse generator reduces cost while still meeting the special requirements of pulsed electric fields (PEFs) in food processing. The pulse generator utilizes a complex programmable logic device (CPLD) to generate trigger signals. Pulse frequency, pulse width and pulse number are controlled by a computer via the RS232 bus. The high voltage pulse generator is well suited to the application of the non-thermal effect of pulsed electric fields on fluid foods, since these parameters can be increased and decreased in steps of 1.

  1. Cost Optimization of Cloud Computing Services in a Networked Environment

    Directory of Open Access Journals (Sweden)

    Eli WEINTRAUB

    2015-04-01

    Cloud computing service providers offer their customers services so as to maximize their revenues, whereas customers wish to minimize their costs. In this paper we concentrate on the consumers' point of view. Cloud computing services are organized according to a hierarchy: software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles that include the software, platform and infrastructure services. Providers also offer platform services bundled with infrastructure services. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. This bundling policy is likely to change in the long run, since it contradicts economic competition theory, causes an unfair pricing model and locks consumers in to specific service providers. In this paper we assume the existence of a free competitive market, in which consumers are free to switch their services among providers. We assume that free market competition will force vendors to adopt open standards, improve the quality of their services and offer a large variety of cloud services in all layers. Our model is aimed at the potential customer who wishes to find the combination of service providers that minimizes his costs. We propose three possible strategies for implementing the model in organizations. We formulate the mathematical model and illustrate its advantages compared to existing pricing practices used by cloud computing consumers.

  2. Verification of Cost Estimating Procedures for MAPS Computer Program.

    Science.gov (United States)

    1982-05-01

    [Garbled table fragment listing water treatment design parameters: filtration (dual media, loading rate 5 gpm/ft), chlorination (storage: tank cars), ammonia, powdered carbon, and a belowground clearwell.] ... cost functions for conventional treatment. The MAPS water treatment module does not account for intakes, sludge handling, clearwells, or high service

  3. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    A key direction in the reengineering of modern computer networks is their transfer to the new energy-saving technology IEEE 802.3az. To make a reasoned decision about the transition to the new technology, a technique is needed that allows network engineers to answer the question of whether a network upgrade is economically feasible. Aim: The aim of this research is the development of a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models for calculating the power consumption of a network port operating under the IEEE 802.3 standard and in the energy-efficient mode of the IEEE 802.3az standard. Frame transmission time in the communication channel is calculated with a queuing model. To determine the values of the network operation parameters, a multi-agent network monitoring method is proposed. Results: The method allows calculating the economic impact of transferring a computer network to the energy-saving IEEE 802.3az technology. To determine the network performance parameters, the use of SNMP network monitoring systems based on RMON MIB agents is proposed.
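
    A rough sketch of the economic-feasibility calculation such a method targets: compare the yearly energy cost of legacy IEEE 802.3 ports with 802.3az ports that drop to a low-power idle between frames. The per-port power figures, tariff and utilisation below are assumptions, not values from the paper.

```python
# Yearly energy-cost comparison: always-on 802.3 ports vs 802.3az ports that
# idle at low power when not transmitting.  All figures are hypothetical.
ports = 240
hours_per_year = 24 * 365
tariff = 0.15                      # $/kWh, assumed

p_active_w, p_idle_w = 0.6, 0.1    # per-port power: active vs low-power idle (assumed)
utilisation = 0.12                 # fraction of time the port actually transmits

legacy_kwh = ports * p_active_w * hours_per_year / 1000
eee_kwh = ports * (utilisation * p_active_w +
                   (1 - utilisation) * p_idle_w) * hours_per_year / 1000

saving = (legacy_kwh - eee_kwh) * tariff
print(f"estimated saving ~ ${saving:,.0f} per year; compare against upgrade cost")
```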

  4. Low cost, high performance far infrared microbolometer

    Science.gov (United States)

    Roer, Audun; Lapadatu, Adriana; Elfving, Anders; Kittilsland, Gjermund; Hohler, Erling

    2010-04-01

    Far infrared (FIR) is becoming more widely accepted within the automotive industry as a powerful sensor to detect Vulnerable Road Users such as pedestrians and bicyclists as well as animals. The main focus of FIR system development lies in reducing the cost of its components, and this will involve optimizing all aspects of the system. Decreased pixel size, improved 3D process integration technologies and improved manufacturing yields will produce the necessary cost reduction on the sensor to enable high market penetration. The improved 3D process integration allows a higher fill factor and improved transmission/absorption properties. Together with the high Thermal Coefficient of Resistance (TCR) and low 1/f noise properties provided by monocrystalline silicon-germanium (SiGe) thermistor material, they lead to bolometer performances beyond those of existing devices. The thermistor material is deposited and optimized on an IR wafer separate from the read-out integrated circuit (ROIC) wafer. The IR wafer is transferred to the ROIC using CMOS-compatible processes and materials, utilizing a low temperature wafer bonding process. Long term vacuum sealing obtained by wafer-scale packaging enables further cost reductions and improved quality. The approach allows independent optimization of ROIC and thermistor material processing and is compatible with existing MEMS foundries, allowing fast time to market.

  5. Philosophy of design for low cost and high reliability

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Liebe, Carl Christian

    1996-01-01

    The Ørsted Star Imager, or Advanced Stellar Compass (ASC), includes the full functionality of a traditional star tracker plus autonomy, i.e. it is able to quickly and autonomously solve "the lost in space" attitude problem and determine its attitude with high precision. The design also provides ... computational speed and fault detection and recovery substantially. The high performance and low cost design was realized by the use of advanced high-level integrated chips, along with a design philosophy of maximum autonomy at all levels. This approach necessitated the use of a prototyping facility which could ... do extensive component testing and screening which addressed the issues of reliability, thermo-mechanical properties, and radiation sensitivity of the commercial ICs. The facility helped to control costs by generating early information on component survival in space. The development philosophy

  6. A model of costs and benefits of meta-level computation

    NARCIS (Netherlands)

    Harmelen, van F.A.H.

    1994-01-01

    It is well known that meta-computation can be used to guide other computations (at the object-level), and thereby reduce the costs of these computations. However, the question arises to what extent the cost of meta-computation offsets the gains made by object-level savings. In this paper we discuss

  7. Manual of phosphoric acid fuel cell power plant cost model and computer program

    Science.gov (United States)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.

  8. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    Technical Report AREIS-TR-16006 (final report, November 2016). [Report documentation page residue removed.] Abstract fragment: "... otherwise be accomplished by other nondestructive testing methods." Subject terms: radiography, high energy, computed tomography (CT

  9. Cost-effectiveness of computed tomography coronary angiography versus conventional invasive coronary angiography.

    Science.gov (United States)

    Darlington, Meryl; Gueret, Pascal; Laissy, Jean-Pierre; Pierucci, Antoine Filipovic; Maoulida, Hassani; Quelen, Céline; Niarra, Ralph; Chatellier, Gilles; Durand-Zaleski, Isabelle

    2015-07-01

    To determine the costs and cost-effectiveness of a diagnostic strategy including computed tomography coronary angiography (CTCA) in comparison with invasive conventional coronary angiography (CA) for the detection of significant coronary artery disease, from the point of view of the healthcare provider. The average cost per CTCA was determined via a micro-costing method in four French hospitals, and the cost of CA was taken from the 2011 French National Cost Study, which collects data at the patient level from a sample of 51 public or not-for-profit hospitals. The average cost of CTCA was estimated to be €180 (95% CI 162-206), based on the use of a 64-slice CT scanner active for 10 h per day. The average cost of CA was estimated to be €1,378 (95% CI 1,126-1,670). The incremental cost-effectiveness ratio of CA for all patients over a strategy including CTCA triage in the intermediate-risk group, no imaging test in the low-risk group, and CA in the high-risk group was estimated to be €6,380 (95% CI 4,714-8,965) for each additional correctly classified patient. This strategy correctly classifies 95.3% (95% CI 94.4-96.2) of all patients in the population studied. A strategy of CTCA triage in the intermediate-risk group, no imaging test in the low-risk group, and CA in the high-risk group has good diagnostic accuracy and could significantly cut costs. Medium-term and long-term outcomes need to be evaluated in patients with coronary stenosis potentially misclassified by CTCA due to false negative examinations.

  10. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  11. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)]

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  12. Naval Computer-Based Instruction: Cost, Implementation and Effectiveness Issues.

    Science.gov (United States)

    1988-03-01

    that precipitated change in the way we do computer-based instruction will be pointed out. Perhaps the most well known of all the CBI projects ... Electric (GE) has been showing a newer technology than CD-I called Digital Video Interactive (DVI). It uses custom chips to compress high quality ... been used to great advantage. The Navy will need to look at CD-I and DVI, interactive extensions of the CD-ROM technology, and decide how we can

  13. Grid connected integrated community energy system. Phase II: final state 2 report. Cost benefit analysis, operating costs and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    1978-03-22

    A grid-connected Integrated Community Energy System (ICES) with a coal-burning power plant located on the University of Minnesota campus is planned. The cost benefit analysis performed for this ICES, the cost accounting methods used, and a computer simulation of the operation of the power plant are described. (LCL)

  14. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation; and seven architecture chapters which...

  15. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
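
    The near-constant-time distribution of a job binary described above relies on network-level hardware collectives inside STORM. As a hedged, high-level stand-in (not STORM's actual mechanism), the sketch below uses an MPI broadcast, which expresses the same collective pattern at the application level; the file names are hypothetical.

        from mpi4py import MPI  # requires an MPI installation plus the mpi4py package

        def distribute_binary(path_on_root, local_path):
            """Ship a job binary to every rank with a single collective broadcast."""
            comm = MPI.COMM_WORLD
            blob = None
            if comm.Get_rank() == 0:
                with open(path_on_root, "rb") as f:
                    blob = f.read()
            blob = comm.bcast(blob, root=0)   # collective; cost grows slowly with rank count
            with open(local_path, "wb") as f:
                f.write(blob)
            comm.Barrier()                    # every rank holds the binary before launch

        if __name__ == "__main__":
            distribute_binary("app.bin", "/tmp/app.bin")

    Run under a launcher such as "mpirun -n 32 python distribute_binary.py"; STORM performs the equivalent operation in the network interface's thread processor, which contributes to the sub-second launch times quoted above.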

  16. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Ginés D. Guerrero

    2014-01-01

    Full Text Available Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  17. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described. Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  18. A low-cost, low-energy tangible programming system for computer illiterates in developing regions

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2008-07-01

    Full Text Available We present a low-cost, low-energy technology design that addresses the lack of readily available functional computers for the vast number of computer-illiterate people in developing countries. The tangible programming language presented...

  19. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at the Ben-Gurion University. This elective course for 3rd year undergraduates and MSc students is taught over one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also address High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy," and we add "Performance." Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course is not intended to teach new physics or new mathematics; rather, it focuses on an integrated approach to solving problems, starting from the physics problem and the corresponding mathematical solution, through the numerical scheme and the writing of an efficient computer code, to the final analysis and visualization.

  20. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  1. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  2. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets; correspondingly, being successful at using high performance computing in science requires at least elementary knowledge of, and skills in, all these areas. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  3. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards on high performance computers to break through the monopoly in the field by foreign manufacturers and vendors.

  4. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motions: Analysis and results

    Science.gov (United States)

    Rowe, W. S.; Sebastian, J. D.; Petrarca, J. R.

    1979-01-01

    Results of theoretical and numerical investigations conducted to develop economical computing procedures were applied to an existing computer program that predicts unsteady aerodynamic loadings caused by leading and trailing edge control surface motions in subsonic compressible flow. Large reductions in computing costs were achieved by removing the spanwise singularity of the downwash integrand and evaluating its effect separately in closed form. Additional reductions were obtained by modifying the incremental pressure term that accounts for downwash singularities at control surface edges. Accuracy of theoretical predictions of unsteady loading at high reduced frequencies was increased by applying new pressure expressions that exactly satisfied the high frequency boundary conditions of an oscillating control surface. Comparative computer results indicated that the revised procedures provide more accurate predictions of unsteady loadings as well as reductions of 50 to 80 percent in computer usage costs.

  5. High-throughput computing in the sciences.

    Science.gov (United States)

    Morgan, Mark; Grimshaw, Andrew

    2009-01-01

    While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, however, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories: "true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
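
    The chapter's central pattern, running many independent copies of a program across machines, can be sketched with the Python standard library alone. The example below is a local, simplified stand-in for a high-throughput system such as Condor/HTCondor; the application name "simulate.py" is hypothetical.

        from concurrent.futures import ProcessPoolExecutor
        import subprocess
        import sys

        def run_one(param):
            """Run one independent copy of the application with its own parameter."""
            # "simulate.py" is a hypothetical stand-alone application.
            result = subprocess.run([sys.executable, "simulate.py", str(param)],
                                    capture_output=True, text=True)
            return param, result.returncode

        if __name__ == "__main__":
            params = range(100)                  # 100 independent tasks, no coupling
            with ProcessPoolExecutor() as pool:  # a cluster scheduler plays this role at scale
                for param, rc in pool.map(run_one, params):
                    print(f"task {param} finished with exit code {rc}")

    Because the tasks share no state, the achievable speed-up is limited mainly by the number of machines available, which is exactly the property high-throughput computing exploits.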

  6. Some Useful Cost-Benefit Criteria for Evaluating Computer-Based Test Delivery Models and Systems

    Science.gov (United States)

    Luecht, Richard M.

    2005-01-01

    Computer-based testing (CBT) is typically implemented using one of three general test delivery models: (1) multiple fixed testing (MFT); (2) computer-adaptive testing (CAT); or (3) multistage testing (MSTs). This article reviews some of the real cost drivers associated with CBT implementation--focusing on item production costs, the costs…

  7. Competing goals draw attention to effort, which then enters cost-benefit computations as input

    OpenAIRE

    2013-01-01

    In contrast to Kurzban et al., we conceptualize the experience of mental effort as the subjective costs of goal pursuit (i.e., the amount of invested resources relative to the amount of available resources). Rather than being an output of computations that compare costs and benefits of the target and competing goals, effort enters these computations as an input.

  8. Low Cost, High Efficiency, High Pressure Hydrogen Storage

    Energy Technology Data Exchange (ETDEWEB)

    Mark Leavitt

    2010-03-31

    A technical and design evaluation was carried out to meet DOE hydrogen fuel targets for 2010. These targets consisted of a system gravimetric capacity of 2.0 kWh/kg, a system volumetric capacity of 1.5 kWh/L and a system cost of $4/kWh. In compressed hydrogen storage systems, the vast majority of the weight and volume is associated with the hydrogen storage tank. In order to meet gravimetric targets for compressed hydrogen tanks, 10,000 psi carbon resin composites were used to provide the high strength required as well as low weight. For the 10,000 psi tanks, carbon fiber is the largest portion of their cost. Quantum Technologies is a tier one hydrogen system supplier for automotive companies around the world. Over the course of the program Quantum focused on development of technology to allow the compressed hydrogen storage tank to meet DOE goals. At the start of the program in 2004 Quantum was supplying systems with a specific energy of 1.1-1.6 kWh/kg, a volumetric capacity of 1.3 kWh/L and a cost of $73/kWh. Based on the gap between DOE targets and Quantum’s then-current capabilities, focus was placed first on cost reduction and second on weight reduction. Both of these were to be accomplished without reduction of the fuel system’s performance or reliability. Three distinct areas were investigated: optimization of composite structures, development of “smart tanks” that could monitor the health of the tank, thus allowing for a lower design safety factor, and the development of “Cool Fuel” technology to allow higher density gas to be stored, thus allowing smaller/lower pressure tanks that would hold the required fuel supply. The second phase of the project deals with three additional distinct tasks focusing on composite structure optimization, liner optimization, and metal.

  9. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  10. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
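
    The reliability figures quoted above are Cronbach's alpha values. For readers unfamiliar with the statistic, a minimal sketch of the standard formula follows (illustrative toy data, not the CPQ analysis itself).

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents x n_items) matrix of scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
            return k / (k - 1) * (1.0 - item_vars / total_var)

        # Toy data: five respondents answering four items on a 1-5 scale.
        scores = [[5, 4, 5, 5], [3, 3, 2, 3], [4, 4, 4, 5], [2, 1, 2, 2], [5, 5, 4, 5]]
        print(round(cronbach_alpha(scores), 3))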

  11. PRCA:A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications typically reach only 8%-15% utilization on modern computer systems. There are many obstacles to improving system efficiency. The root cause is the conflict between the fixed, general-purpose computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload, and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  12. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  13. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  14. A low-computational-cost inverse heat transfer technique for convective heat transfer measurements in hypersonic flows

    Science.gov (United States)

    Avallone, F.; Greco, C. S.; Schrijer, F. F. J.; Cardone, G.

    2015-04-01

    The measurement of the convective wall heat flux in hypersonic flows may be particularly challenging in the presence of high-temperature gradients and when using high-thermal-conductivity materials. In this case, the solution of multidimensional problems is necessary, but it considerably increases the computational cost. In this paper, a low-computational-cost inverse data reduction technique is presented. It uses a recursive least-squares approach in combination with the trust-region-reflective algorithm as the optimization procedure. The computational cost is reduced by performing the discrete Fourier transform on the discrete convective heat flux function and by identifying the most relevant coefficients as the unknowns of the optimization algorithm. In the paper, the technique is validated by means of both synthetic data, built in order to reproduce physical conditions, and experimental data, acquired in the Hypersonic Test Facility Delft at Mach 7.5 on two wind tunnel models having different thermal properties.
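
    The cost reduction comes from optimizing only the leading Fourier coefficients of the heat-flux history with a trust-region-reflective least-squares solver. The sketch below shows that idea with SciPy; the forward thermal model here is a crude placeholder, not the model used in the paper, and all parameters are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 1.0, 200)              # time axis, arbitrary units

        def heat_flux(coeffs, t):
            """Rebuild q(t) from a truncated Fourier series: mean plus N harmonics."""
            q = np.full_like(t, coeffs[0])
            n_harm = (len(coeffs) - 1) // 2
            for n in range(1, n_harm + 1):
                q += coeffs[2 * n - 1] * np.cos(2 * np.pi * n * t)
                q += coeffs[2 * n] * np.sin(2 * np.pi * n * t)
            return q

        def forward_model(q, t):
            """Placeholder direct model (a simple cumulative response) standing in
            for the wall temperature predicted from a given heat-flux history."""
            return np.cumsum(q) * (t[1] - t[0])

        T_measured = forward_model(1.0 + 0.3 * np.sin(2 * np.pi * t), t)  # synthetic data

        def residuals(coeffs):
            return forward_model(heat_flux(coeffs, t), t) - T_measured

        fit = least_squares(residuals, x0=np.zeros(7), method="trf")  # trust-region-reflective
        print(np.round(fit.x, 3))

    Because only seven coefficients are optimized instead of the full 200-sample flux history, the number of unknowns, and hence the cost of each solver iteration, drops sharply, which is the essence of the low-computational-cost formulation.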

  15. Low cost, high tech seed cleaning

    Science.gov (United States)

    Robert P. Karrfalt

    2013-01-01

    Clean seeds are a great asset in native plant restoration. However, seed cleaning equipment is often too costly for many small operations. This paper introduces how several tools and materials intended for other purposes can be used directly or made into simple machines to clean seeds.

  16. The High Cost of Saving Energy Dollars.

    Science.gov (United States)

    Rose, Patricia

    1985-01-01

    In alternative financing a private company provides the capital and expertise for improving school energy efficiency. Savings are split between the school system and the company. Options for municipal leasing, cost sharing, and shared savings are explained along with financial, procedural, and legal considerations. (MLF)

  17. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
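
    A minimal sketch of the grouping step described in this record, collecting the calling-instruction addresses of each thread and clustering threads that share them, is shown below; the address lists are made up and this is not the disclosed apparatus itself.

        from collections import defaultdict

        def group_threads_by_stack(stacks):
            """Group thread IDs that report identical lists of calling-instruction addresses."""
            groups = defaultdict(list)
            for tid, addresses in stacks.items():
                groups[tuple(addresses)].append(tid)
            return groups

        # Hypothetical per-thread return-address lists gathered from a stalled parallel job.
        stacks = {
            0: [0x400A10, 0x400B33, 0x400C7F],
            1: [0x400A10, 0x400B33, 0x400C7F],
            2: [0x400A10, 0x400D02],            # the outlier: a likely defective thread
            3: [0x400A10, 0x400B33, 0x400C7F],
        }

        for trace, tids in group_threads_by_stack(stacks).items():
            print(f"{len(tids)} thread(s) at {[hex(a) for a in trace]}: {tids}")

    At the scale of a large machine, threads in small or singleton groups stand out immediately, which is how displaying the groups helps identify defective threads.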

  18. Versatile, low-cost, computer-controlled, sample positioning system for vacuum applications

    Science.gov (United States)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1991-01-01

    A versatile, low-cost, easy-to-implement, microprocessor-based motorized positioning system (MPS) suitable for accurate sample manipulation in a Secondary Ion Mass Spectrometry (SIMS) system, and for other ultra-high vacuum (UHV) applications, was designed and built at NASA LeRC. The system can be operated manually or under computer control. In the latter case, local as well as remote operation is possible via the IEEE-488 bus. The position of the sample can be controlled in three linear orthogonal coordinates and one angular coordinate.

  19. Case mix, quality and high-cost kidney transplant patients.

    Science.gov (United States)

    Englesbe, M J; Dimick, J B; Fan, Z; Baser, O; Birkmeyer, J D

    2009-05-01

    A better understanding of high-cost kidney transplant patients would be useful for informing value-based purchasing strategies by payers. This retrospective cohort study was based on the Medicare Provider Analysis and Review (MEDPAR) files from 2003 to 2006. The focus of this analysis was high-cost kidney transplant patients (patients that qualified for Medicare outlier payments and 30-day readmission payments). Using regression techniques, we explored relationships between high-cost kidney transplant patients, center-specific case mix, and center quality. Among 43 393 kidney transplants in Medicare recipients, 35.2% were categorized as high-cost patients. These payments represented 20% of total Medicare payments for kidney transplantation and exceeded $200 million over the study period. Case mix was associated with these payments and was an important factor underlying variation in hospital payments for high-cost patients. Hospital quality was also a strong determinant of future Medicare payments for high-cost patients. Compared to high-quality centers, low-quality centers cost Medicare an additional $1185 per kidney transplant. Payments for high-cost patients represent a significant proportion of the total costs of kidney transplant surgical care. Quality improvement may be an important strategy for reducing the costs of kidney transplantation.

  20. Experiments with a low-cost system for computer graphics material model acquisition

    Science.gov (United States)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.
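
    The direct/global separation with high-spatial-frequency patterns cited here (Nayar et al., 2006) reduces, per pixel, to combining the brightest and darkest observations under shifted half-on illumination. A minimal sketch with synthetic captures standing in for real ones follows; the scene data are made up.

        import numpy as np

        def separate_direct_global(images):
            """Per-pixel separation from a stack of images captured under shifted
            high-frequency patterns that light roughly half the scene at a time:
            direct ~ max - min, global ~ 2 * min (Nayar et al., 2006)."""
            stack = np.stack(images, axis=0).astype(float)
            l_max = stack.max(axis=0)
            l_min = stack.min(axis=0)
            return l_max - l_min, 2.0 * l_min

        # Synthetic stand-in: eight captures of a 64x64 scene under shifted stripes.
        rng = np.random.default_rng(1)
        scene_direct = rng.uniform(0.0, 1.0, (64, 64))
        scene_global = rng.uniform(0.0, 0.3, (64, 64))
        captures = []
        for shift in range(8):
            stripes = (((np.arange(64)[None, :] + shift) // 4) % 2).astype(float)
            captures.append(scene_direct * stripes + 0.5 * scene_global)

        direct, global_ = separate_direct_global(captures)
        print(np.allclose(direct, scene_direct), np.allclose(global_, scene_global))

    Isolating the direct and indirect terms in this way is what allows simple analytical reflectance models to be fitted to each component separately, as the record describes.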

  1. Case Mix, Quality and High-Cost Kidney Transplant Patients

    OpenAIRE

    Englesbe, M. J.; Dimick, J. B.; Fan, Z; Baser, O.; Birkmeyer, J. D.

    2009-01-01

    A better understanding of high-cost kidney transplant patients would be useful for informing value-based purchasing strategies by payers. This retrospective cohort study was based on the Medicare Provider Analysis and Review (MEDPAR) files from 2003 to 2006. The focus of this analysis was high-cost kidney transplant patients (patients that qualified for Medicare outlier payments and 30-day readmission payments). Using regression techniques, we explored relationships between high-cost kidney t...

  2. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  3. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course (the Project Physics program and an individualized-study PSSC physics program). Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  4. Ultra High Brightness/Low Cost Fiber Coupled Packaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High peak power, high efficiency, high reliability lightweight, low cost QCW laser diode pump modules with up to 1000W of QCW output become possible with nLight's...

  5. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    of the two options assuming the quality of service is identical across the options. The decision-making process was found to require substantial information gathering to identify explicit and implicit costs to inform the final decision. Careful considerations of decision time horizons also matter...

  6. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motion. Addendum to computer program description

    Science.gov (United States)

    Rowe, W. S.; Petrarca, J. R.

    1980-01-01

    Changes that provide increased accuracy and increased user flexibility in the prediction of unsteady loadings caused by control surface motions are described. Analysis flexibility is increased by reducing the restrictions on the location of the downwash stations relative to the leading edge and the edges of the control surface boundaries. Analysis accuracy is increased in predicting unsteady loading for high Mach number analysis conditions through use of additional chordwise downwash stations. User guidelines are presented to extend the analysis capabilities to unusual wing control surface configurations. Comparative results indicate that the revised procedures provide accurate predictions of unsteady loadings while reducing by 40 to 75 percent the computer usage cost required by previous versions of this program.

  7. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    Science.gov (United States)

    2011-01-01

    Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. Cost (Pound (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7 year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results CAD is cost increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment will result in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost effective alternative to double reading for mammography screening in the UK. This study

  8. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    Directory of Open Access Journals (Sweden)

    Wallis Matthew G

    2011-01-01

    Full Text Available Abstract Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. Cost (Pound (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7 year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results CAD is cost increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment will result in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g., a decrease in the recall rate) CAD is unlikely to be a cost effective alternative to double reading for mammography screening

  9. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    Science.gov (United States)

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
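
    The "optimal order of jobs" idea can be illustrated with a toy scheduler: predict each comparison's runtime, then pack the longest jobs first across a fixed pool of instances so no instance sits idle while others run long. Both the runtime model and the numbers below are hypothetical, not those from the Roundup deployment.

        import heapq

        def predict_runtime(size_a, size_b):
            """Hypothetical runtime model: hours proportional to the product of genome sizes."""
            return 1e-13 * size_a * size_b

        def pack_jobs(jobs, n_instances):
            """Greedy longest-processing-time-first packing to balance instance hours."""
            jobs = sorted(jobs, key=lambda j: j[1], reverse=True)
            heap = [(0.0, i, []) for i in range(n_instances)]   # (load, instance id, assigned)
            heapq.heapify(heap)
            for name, hours in jobs:
                load, idx, assigned = heapq.heappop(heap)       # least-loaded instance so far
                assigned.append(name)
                heapq.heappush(heap, (load + hours, idx, assigned))
            return sorted(heap, key=lambda item: item[1])

        comparisons = [(f"pair{i}", predict_runtime(3.0e6 + 1.0e5 * i, 5.0e6)) for i in range(20)]
        for load, idx, assigned in pack_jobs(comparisons, n_instances=4):
            print(f"instance {idx}: {load:.2f} predicted hours, {len(assigned)} jobs")

    Submitting work in a deliberate order like this, rather than randomly with respect to runtime, is what the study credits for the reported savings of at least 40%.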

  10. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  11. Computers and Social Knowledge; Opportunities and Opportunity Cost.

    Science.gov (United States)

    Hartoonian, Michael

    Educators must use computers to move society beyond the information age and toward the age of wisdom. The movement toward knowledge and wisdom constitutes an evolution beyond the "third wave" or electronic/information age, the phase of history in which, according to Alvin Toffler, we are now living. We are already moving into a fourth wave, the…

  12. High efficiency low cost GaAs/Ge cell technology

    Science.gov (United States)

    Ho, Frank

    1990-01-01

    Viewgraphs on high efficiency low cost GaAs/Ge cell technology are presented. Topics covered include: high efficiency, low cost GaAs/Ge solar cells; advantages of Ge; comparison of typical production cells for space applications; panel level comparisons; and solar cell technology trends.

  13. High-Efficiency Solar Cells on Low-Cost Substrates

    Science.gov (United States)

    Daiello, R. V.; Robinson, P. H.

    1982-01-01

    High-efficiency solar cells are made in thin epitaxial films grown on low-cost commercial silicon substrates. The cost of the cells is much less than if high-quality single-crystal silicon were used for the substrates, and their performance is almost as good.

  14. Why projects often fail even with high cost contingencies

    Energy Technology Data Exchange (ETDEWEB)

    Kujawski, Edouard

    2002-02-28

    In this note we assume that the individual risks have been adequately quantified and the total project cost contingency adequately computed to ensure an agreed-to probability or confidence level that the total project cost estimate will not be exceeded. But even projects that implement such a process are likely to result in significant cost overruns and/or project failure if the project manager allocates the contingencies to the individual subsystems. The intuitive and mathematically valid solution is to maintain a project-wide contingency and to distribute it to the individual risks on an as-needed basis. Such an approach ensures cost-efficient risk management, and projects that implement it are more likely to succeed and to cost less. We illustrate these ideas using a simplified project with two independent risks. The formulation can readily be extended to multiple risks.
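
    A quick Monte Carlo check of the note's argument, using two independent lognormal cost risks purely for illustration (these are not the author's distributions), is sketched below: covering each subsystem separately at the agreed confidence level requires more contingency than covering the project total at the same level.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        # Two independent cost risks (illustrative overrun distributions, in $M).
        risk_a = rng.lognormal(mean=1.0, sigma=0.5, size=n)
        risk_b = rng.lognormal(mean=1.2, sigma=0.6, size=n)

        conf = 0.80  # agreed-to confidence level

        # Allocating contingency per subsystem: each covered at 80% on its own.
        per_subsystem = np.quantile(risk_a, conf) + np.quantile(risk_b, conf)
        # Holding a single project-wide contingency: cover the total at 80%.
        project_wide = np.quantile(risk_a + risk_b, conf)

        print(f"sum of subsystem contingencies: {per_subsystem:6.2f} $M")
        print(f"project-wide contingency:       {project_wide:6.2f} $M")

    The project-wide figure comes out smaller because independent risks rarely all go bad at once, which is the intuition behind keeping the contingency at the project level and releasing it to individual risks only as needed.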

  15. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  16. High Channel Count, Low Cost, Multiplexed FBG Sensor Systems

    Institute of Scientific and Technical Information of China (English)

    J. J. Pan; FengQing Zhou; Kejian Guan; Joy Jiang; Liang Dong; Albert Li; Xiangdong Qiu; Jonathan Zhang

    2003-01-01

    Drawing on extensive product development experience in WDM telecommunication networks, we introduce several high-channel-count, multiplexed FBG fiber optic sensor systems featuring reliable high performance and low cost.

  17. Estimating boiling water reactor decommissioning costs: A user's manual for the BWR Cost Estimating Computer Program (CECP) software. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bierschbach, M.C. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-06-01

    Nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review their decommissioning cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning boiling water reactor (BWR) power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  18. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

    Energy Technology Data Exchange (ETDEWEB)

    Bierschbach, M.C. [Pacific Northwest Lab., Richland, WA (United States)

    1994-12-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the U.S. Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  19. Obstacle detection algorithm of low computational cost for Guanay II AUV

    OpenAIRE

    Galarza Bogotá, Cesar Mauricio; Prat Tasias, Jordi; Gomáriz Castro, Spartacus

    2016-01-01

    Obstacle detection is one of the most important stages in an obstacle avoidance system. This work explains the operation of a strategy designed and implemented for the overall detection of objects at low computational cost. This low-computational-cost strategy is based on performing a spatial segmentation of the information obtained by the SONAR and on determining the minimum distance between the SONAR (AUV) and the obstacle. Peer Reviewed
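
    A simplified sketch of the segmentation step, binning sonar returns into angular sectors and keeping the minimum range per sector, is given below; the sweep data and parameter values are made up and this is not the Guanay II implementation.

        def nearest_obstacle_per_sector(returns, n_sectors=8, max_range=50.0):
            """Bin (bearing_deg, range_m) sonar returns into angular sectors and keep
            the minimum range per sector; max_range stands for 'no obstacle seen'."""
            sector_min = [max_range] * n_sectors
            width = 360.0 / n_sectors
            for bearing, rng in returns:
                if rng <= 0.0 or rng > max_range:
                    continue                      # discard spurious or out-of-range returns
                idx = int((bearing % 360.0) // width)
                sector_min[idx] = min(sector_min[idx], rng)
            return sector_min

        # Hypothetical single sonar sweep: (bearing in degrees, range in metres).
        sweep = [(10.0, 22.5), (12.0, 21.8), (95.0, 8.4), (181.0, 40.2), (355.0, 30.0)]
        for i, d in enumerate(nearest_obstacle_per_sector(sweep)):
            print(f"sector {i}: closest return at {d:.1f} m")

    Reducing each sweep to one distance per sector keeps the per-sweep work small, in line with the low-computational-cost goal stated above.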

  20. Obstacle detection algorithm of low computational cost for Guanay II AUV

    OpenAIRE

    Galarza Bogotá, Cesar Mauricio; Prat Tasias, Jordi; Gomáriz Castro, Spartacus

    2016-01-01

    Obstacle detection is one of the most important stages in an obstacle avoidance system. This work explains the operation of a strategy designed and implemented for the overall detection of objects at low computational cost. This low-computational-cost strategy is based on performing a spatial segmentation of the information obtained by the SONAR and on determining the minimum distance between the SONAR (AUV) and the obstacle.

  1. Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application

    Science.gov (United States)

    Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.

    2013-12-01

    The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on both the Cloud and the local system, and the processing times for the analyses were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. Compute cost is charged at an hourly rate, and storage cost at a rate per Gigabyte per month. Incoming data transfer is free, while outgoing data transfer is charged per Gigabyte. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating costs. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
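
    The cost comparison boils down to simple arithmetic over published rates. The sketch below shows the shape of that calculation with placeholder numbers only; none of the rates or totals are the actual GES DISC or Amazon figures.

        def cloud_cost(hours, rate_per_hour, gb_stored, rate_gb_month, months, gb_out, rate_gb_out):
            """Cloud total = compute + storage + outbound transfer (inbound transfer is free)."""
            return (hours * rate_per_hour
                    + gb_stored * rate_gb_month * months
                    + gb_out * rate_gb_out)

        def local_cost(hardware, software, monthly_ops, months):
            """Local total = capital outlay plus operations over the same period."""
            return hardware + software + monthly_ops * months

        # Placeholder rates and volumes, purely for illustration.
        print("cloud:", cloud_cost(hours=2000, rate_per_hour=0.50, gb_stored=500,
                                   rate_gb_month=0.10, months=12, gb_out=200, rate_gb_out=0.09))
        print("local:", local_cost(hardware=8000, software=1000, monthly_ops=300, months=12))

    Which side wins depends entirely on utilization and data volumes; in the study quoted above, the cloud configuration came out 36% cheaper for this particular workload.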

  2. Robust Coding for Lossy Computing with Observation Costs

    CERN Document Server

    Ahmadi, Behzad

    2011-01-01

    An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence, which is taken at the decoder or at the encoder; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.

  3. A Mathematical Model for Project Planning and Cost Analysis in Computer Assisted Instruction.

    Science.gov (United States)

    Fitzgerald, William F.

    Computer-assisted instruction (CAI) has become sufficiently widespread to require attention to the relationships between its costs, administration and benefits. Despite difficulties in instituting them, quantifiable cost-effectiveness analyses offer several advantages. They allow educators to specify with precision anticipated instructional loads,…

  4. Two Computer Programs for Equipment Cost Estimation and Economic Evaluation of Chemical Processes.

    Science.gov (United States)

    Kuri, Carlos J.; Corripio, Armando B.

    1984-01-01

    Describes two computer programs for use in process design courses: an easy-to-use equipment cost estimation program based on latest cost correlations available and an economic evaluation program which calculates two profitability indices. Comparisons between programed and hand-calculated results are included. (JM)

  5. Capital cost: low and high sulfur coal plants; 800 MWe

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-01-01

    This Commercial Electric Power Cost Study for 800-MWe (Nominal) low- and high-sulfur coal plants consists of three volumes. (This is the fourth subject in a series of eight performed in the Commercial Electric Power Cost Studies by the US NRC). The low-sulfur coal plant is described in Volumes I and II, while Volume III (this volume) describes the high sulfur coal plant. The design basis, drawings, and summary cost estimate for a 794-MWe high-sulfur coal plant are presented in this volume. This information was developed by redesigning the low-sulfur sub-bituminous coal plant for burning high-sulfur bituminous coal. The reference design includes a lime flue-gas-desulfurization system. These coal plants utilize a mechanical draft (wet) cooling tower system for condenser heat removal. Costs of alternate cooling systems are provided in Report No. 7 in this series of studies of costs of commercial electrical power plants.

  6. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  7. A low computation cost method for seizure prediction.

    Science.gov (United States)

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter for reducing possible sporadic and isolated false alarms and then the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that the changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both the HFD and the BLDA classifier have low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction.
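
    The feature driving the predictor is the Higuchi fractal dimension of short EEG windows. A compact reference implementation of the standard Higuchi algorithm is sketched below; the window length and k_max are illustrative, not the values used in the paper.

        import numpy as np

        def higuchi_fd(x, k_max=8):
            """Higuchi fractal dimension of a 1-D signal (standard algorithm)."""
            x = np.asarray(x, dtype=float)
            n = x.size
            lk = []
            for k in range(1, k_max + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)               # sub-series x[m], x[m+k], ...
                    if idx.size < 2:
                        continue
                    dist = np.abs(np.diff(x[idx])).sum()
                    norm = (n - 1) / ((idx.size - 1) * k)  # Higuchi normalization
                    lengths.append(dist * norm / k)
                lk.append(np.mean(lengths))
            # The slope of log L(k) versus log (1/k) estimates the fractal dimension.
            k_vals = np.arange(1, k_max + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
            return slope

        # White noise has an HFD near 2; a smooth sine is much closer to 1.
        rng = np.random.default_rng(0)
        print(round(higuchi_fd(rng.standard_normal(1000)), 2))
        print(round(higuchi_fd(np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))), 2))

    Because the HFD of a window costs only a few passes over the samples, it fits the low-computation budget that makes the method suitable for real-time prediction.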

  8. A Web-Based Computer-Tailored Alcohol Prevention Program for Adolescents: Cost-Effectiveness and Intersectoral Costs and Benefits

    Science.gov (United States)

    2016-01-01

    Background Preventing excessive alcohol use among adolescents is important not only to foster individual and public health, but also to reduce alcohol-related costs inside and outside the health care sector. Computer tailoring can be both effective and cost-effective for working with many lifestyle behaviors, yet the available information on the cost-effectiveness of computer tailoring for reducing alcohol use by adolescents is limited as is information on the costs and benefits pertaining to sectors outside the health care sector, also known as intersectoral costs and benefits (ICBs). Objective The aim was to assess the cost-effectiveness of a Web-based computer-tailored intervention for reducing alcohol use and binge drinking by adolescents from a health care perspective (excluding ICBs) and from a societal perspective (including ICBs). Methods Data used were from the Alcoholic Alert study, a cluster randomized controlled trial with randomization at the level of schools into two conditions. Participants either played a game with tailored feedback on alcohol awareness after the baseline assessment (intervention condition) or received care as usual (CAU), meaning that they had the opportunity to play the game subsequent to the final measurement (waiting list control condition). Data were recorded at baseline (T0=January/February 2014) and after 4 months (T1=May/June 2014) and were used to calculate incremental cost-effectiveness ratios (ICERs), both from a health care perspective and a societal perspective. Stochastic uncertainty in the data was dealt with by using nonparametric bootstraps (5000 simulated replications). Additional sensitivity analyses were conducted based on excluding cost outliers. Subgroup cost-effectiveness analyses were conducted based on several background variables, including gender, age, educational level, religion, and ethnicity. Results From both the health care perspective and the societal perspective for both outcome measures, the
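
    The headline quantity in such an analysis is the incremental cost-effectiveness ratio, resampled with a nonparametric bootstrap. A small sketch with synthetic per-participant data (not the trial data) follows; the 5000 replications mirror the number used in the study.

        import numpy as np

        def icer(cost_int, eff_int, cost_ctl, eff_ctl):
            """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
            return (cost_int.mean() - cost_ctl.mean()) / (eff_int.mean() - eff_ctl.mean())

        rng = np.random.default_rng(7)
        n = 300  # hypothetical participants per arm
        cost_int, eff_int = rng.gamma(2.0, 40.0, n), rng.normal(1.2, 1.0, n)
        cost_ctl, eff_ctl = rng.gamma(2.0, 30.0, n), rng.normal(0.8, 1.0, n)

        point = icer(cost_int, eff_int, cost_ctl, eff_ctl)

        boot = []
        for _ in range(5000):                    # nonparametric bootstrap replications
            bi = rng.integers(0, n, n)           # resample the intervention arm
            bc = rng.integers(0, n, n)           # resample the control arm
            boot.append(icer(cost_int[bi], eff_int[bi], cost_ctl[bc], eff_ctl[bc]))

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"ICER = {point:.1f} (95% bootstrap interval {lo:.1f} to {hi:.1f})")

    Running the same calculation once with health care costs only and once with intersectoral costs and benefits included is what distinguishes the health care perspective from the societal perspective described above.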

  9. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
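
    The reconstruction engine referred to above is an iterative conjugate-gradients (CG) solve. As a rough orientation only, the textbook CG iteration is sketched below on a small synthetic symmetric positive-definite system; the actual system matrix, which models variable resolution, attenuation and scatter, is far larger and is not reproduced here, and the ten iterations simply mirror the count quoted in the abstract.

```python
import numpy as np

def conjugate_gradients(A, b, n_iter=10):
    """Textbook CG iteration for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# tiny synthetic example standing in for the projection system
rng = np.random.default_rng(1)
M = rng.standard_normal((64, 64))
A = M @ M.T + 64 * np.eye(64)      # guaranteed SPD
b = rng.standard_normal(64)
print(np.linalg.norm(A @ conjugate_gradients(A, b) - b))
```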

  10. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-02-02

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required.

  11. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  12. Modelling the Intention to Adopt Cloud Computing Services: A Transaction Cost Theory Perspective

    Directory of Open Access Journals (Sweden)

    Ogan Yigitbasioglu

    2014-11-01

    Full Text Available This paper uses transaction cost theory to study cloud computing adoption. A model is developed and tested with data from an Australian survey. According to the results, perceived vendor opportunism and perceived legislative uncertainty around cloud computing were significantly associated with perceived cloud computing security risk. There was also a significant negative relationship between perceived cloud computing security risk and the intention to adopt cloud services. This study also reports on adoption rates of cloud computing in terms of applications, as well as the types of services used.

  13. Parents and the High Cost of Child Care: 2014 Report

    Science.gov (United States)

    Wood, Stephen; Fraga, Lynette; McCready, Michelle

    2014-01-01

    Eleven million children younger than age five are in some form of child care in the United States. The "Parents and the High Cost of Child Care: 2014 Report" summarizes the cost of child care across the country, examines the importance of child care as a workforce support and as an early learning program, and explores the effect of high…

  14. Parents and the High Cost of Child Care: 2015 Report

    Science.gov (United States)

    Fraga, Lynette; Dobbins, Dionne; McCready, Michelle

    2015-01-01

    Eleven million children younger than age five are in some form of child care in the United States. The "Parents and the High Cost of Child Care: 2015 Report" summarizes the cost of child care across the country, examines the importance of child care as a workforce support and as an early learning program, and explores the effect of high…

  15. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems. 32 refs., 10 figs., 6 tabs.
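
    The "module" idea is easiest to see in a block-partitioned kernel: the algorithm is expressed in terms of small dense sub-problems that fit in fast memory. The sketch below is a generic illustration, not code from the surveyed libraries; the block size nb is a tunable assumption, and production libraries delegate the inner update to optimized Level 3 BLAS rather than the plain numpy call used here.

```python
import numpy as np

def blocked_matmul(A, B, nb=64):
    """Block-partitioned C = A @ B; each inner update is a small dense 'module'."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, nb):
        for j in range(0, n, nb):
            for l in range(0, k, nb):
                # the module: a dense sub-multiply that reuses data in fast memory
                C[i:i + nb, j:j + nb] += A[i:i + nb, l:l + nb] @ B[l:l + nb, j:j + nb]
    return C

A = np.random.default_rng(0).standard_normal((200, 150))
B = np.random.default_rng(1).standard_normal((150, 180))
print(np.allclose(blocked_matmul(A, B), A @ B))   # True
```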

  16. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for C^{p-1} global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N) p^2) for the one-dimensional (1D) case, O(N p^2) for the two-dimensional (2D) case, and O(N^{4/3} p^2) for the three-dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates in terms of both p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.
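
    To make the quoted scalings concrete, the snippet below evaluates the leading-order estimates for an illustrative problem size; the constants are dropped, so only ratios between configurations are meaningful.

```python
import math

def solver_cost(N, p, dim):
    """Leading-order cost of the multi-frontal solver quoted above (constants dropped)."""
    if dim == 1:
        return math.log(N) * p ** 2
    if dim == 2:
        return N * p ** 2
    if dim == 3:
        return N ** (4.0 / 3.0) * p ** 2
    raise ValueError("dim must be 1, 2 or 3")

# e.g. relative cost of raising the B-spline order from p=1 to p=4 in 3D
N = 128 ** 3
print(solver_cost(N, 4, 3) / solver_cost(N, 1, 3))   # 16x in this leading-order model
```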

  17. Cost optimization of load carrying thin-walled precast high performance concrete sandwich panels

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hansen, Sanne; Hulin, Thomas

    2015-01-01

    The paper describes a procedure to find the structurally and thermally efficient design of load-carrying thin-walled precast High Performance Concrete Sandwich Panels (HPCSP) with an optimal economical solution. A systematic optimization approach is based on the selection of the material's performances and HPCSP's geometrical parameters, as well as on the material cost function in the HPCSP design. Cost functions are presented for High Performance Concrete (HPC), the insulation layer and reinforcement, and include labour-related costs. The present study reports the economic data corresponding to specific manufacturing… The solution of the optimization problem is performed in the computer package software Matlab® with the SQPlab package and integrates the processes of HPCSP design, quantity take-off and cost estimation. The proposed optimization process results in complex HPCSP design proposals that achieve minimum cost of HPCSP.

  18. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  19. Computing what the public wants: some issues in road safety cost-benefit analysis.

    Science.gov (United States)

    Hauer, Ezra

    2011-01-01

    In road safety, as in other fields, cost-benefit analysis (CBA) is used to justify the investment of public money and to establish priority between projects. It amounts to a computation by which 'few' - the CB analysts - aim to determine what the 'many' - those on behalf of which the choice is to be made - would choose. The question is whether there are grounds to believe that the tool fits the aim. I argue that the CBA tool is deficient. First, because estimates of the value of statistical life and injury on which the CBA computation rests are all over the place, inconsistent with the value of time estimates, and government guidance on the matter appears to be arbitrary. Second, because the premises of New Welfare Economics on which the CBA is founded apply only in circumstances which, in road safety, are rare. Third, because the CBA requires the computation of present values which must be questioned when the discounting is of future lives and of time. Because time savings are valued too highly when compared to life and because discounting tends to unjustifiably diminish the value of lives saved in the future, the CBA tends to bias decisions against investment in road safety.
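
    The discounting objection is easy to see in a worked example: with any positive discount rate, the present value of a life saved decades from now shrinks sharply relative to a time saving enjoyed today. The 3% rate and unit amounts below are purely illustrative and are not figures taken from the paper.

```python
def present_value(amount, years, rate=0.03):
    """Standard present-value discounting: PV = amount / (1 + rate) ** years."""
    return amount / (1.0 + rate) ** years

# a benefit valued at 1.0 today is worth far less if it accrues in 40 years
print(present_value(1.0, 0))    # 1.00
print(present_value(1.0, 40))   # ~0.31
```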

  20. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
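
    Transfer entropy, the directed-interaction measure used in this study, can be estimated for binary spike trains with simple plug-in probabilities. The sketch below uses history length 1 and omits the partial information decomposition step entirely, so it is only a minimal illustration of the quantity, not the analysis pipeline of the paper.

```python
import numpy as np

def transfer_entropy(source, target, eps=1e-12):
    """Plug-in transfer entropy TE(source -> target), in bits, for binary arrays (history length 1)."""
    x = np.asarray(source)[:-1]   # source past
    y = np.asarray(target)[:-1]   # target past
    y1 = np.asarray(target)[1:]   # target future
    te = 0.0
    for a in (0, 1):              # y_{t+1}
        for b in (0, 1):          # y_t
            for c in (0, 1):      # x_t
                p_abc = np.mean((y1 == a) & (y == b) & (x == c))
                if p_abc < eps:
                    continue
                p_bc = np.mean((y == b) & (x == c))
                p_ab = np.mean((y1 == a) & (y == b))
                p_b = np.mean(y == b)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

# a target that copies the source with a one-step delay carries ~1 bit of TE
rng = np.random.default_rng(0)
src = rng.integers(0, 2, 10000)
tgt = np.roll(src, 1)
print(transfer_entropy(src, tgt))   # close to 1
print(transfer_entropy(tgt, src))   # close to 0
```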

  1. A real-time fault-tolerant scheduling algorithm with low dependability cost in on-board computer system

    Institute of Scientific and Technical Information of China (English)

    WANG Pei-dong; WEI Zhen-hua

    2008-01-01

    To make the on-board computer system of a satellite more dependable and better able to meet real-time constraints, this paper proposes a fault-tolerant scheduling algorithm with high-priority recovery. The algorithm can schedule on-board fault-tolerant tasks in real time. The use of a dependability cost reduces the overhead of scheduling the fault-tolerant tasks, and the high-priority recovery mechanism improves the response to recovery tasks. The fault-tolerant scheduling model is presented, and simulation results validate the correctness and feasibility of the proposed algorithm.

  2. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implication for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors in Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  3. A simple, low-cost, data logging pendulum built from a computer mouse

    Energy Technology Data Exchange (ETDEWEB)

    Gintautas, Vadas [Los Alamos National Laboratory; Hubler, Alfred [UIUC

    2009-01-01

    Lessons and homework problems involving a pendulum are often a big part of introductory physics classes and laboratory courses from high school to undergraduate levels. Although laboratory equipment for pendulum experiments is commercially available, it is often expensive and may not be affordable for teachers on fixed budgets, particularly in developing countries. We present a low-cost, easy-to-build rotary sensor pendulum using the existing hardware in a ball-type computer mouse. We demonstrate how this apparatus may be used to measure both the frequency and coefficient of damping of a simple physical pendulum. This easily constructed laboratory equipment makes it possible for all students to have hands-on experience with one of the most important simple physical systems.

  4. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  5. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system in a car up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that provide the required availability and reliability as well as the required increase in data processing power. At the same time, and in tension with these quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of customer requirements, and reuse of available computer systems has not always been possible, owing to obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  6. Does Not Compute: The High Cost of Low Technology Skills in the U.S.--and What We Can Do about It. Vital Signs: Reports on the Condition of STEM Learning in the U.S.

    Science.gov (United States)

    Change the Equation, 2015

    2015-01-01

    Although American millennials are the first generation of "digital natives"--that is, people who grew up with computers and the internet--they are not very tech savvy. Using technology for social networking, surfing the web, or taking selfies is a far cry from using it to solve complex problems at work or at home. Truly tech savvy people…

  7. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  8. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. A Phenomenological Cost Model for High Energy Particle Accelerators

    CERN Document Server

    Shiltsev, Vladimir

    2014-01-01

    Accelerator-based high-energy physics has been at the forefront of scientific discoveries for more than half a century. The accelerator technology of colliders has progressed immensely, while beam energy, luminosity, facility size, and cost have grown by several orders of magnitude. The method of colliding beams has not fully exhausted its potential, but its progress has slowed considerably. In this paper we derive a simple scaling model for the cost of large accelerators and colliding beam facilities based on the costs of 17 big facilities which have been either built or carefully estimated. Although this approach cannot replace an actual cost estimate based on an engineering design, the parameterization indicates a realistic cost range for considering which future frontier accelerator facilities might be fiscally realizable.

  10. WHAT DRIVES HIGH COST OF FINANCE IN MOLDOVA?

    Directory of Open Access Journals (Sweden)

    Alexandru Stratan

    2012-03-01

    Full Text Available Why are the costs of finance high in the Republic of Moldova? Is this a problem for the business environment? These are the questions discussed in this paper. Following the well-known Growth Diagnostics approach of Hausmann, Rodrik and Velasco, the authors assess the barriers and impediments to access to finance in the Republic of Moldova. Guided by international and national statistics, they find evidence of poor intermediation, poor institutions, a high level of inflation, and high collateral requirements as the major causes of the high cost of financial resources in the Republic of Moldova. At the end of the study, the authors give policy recommendations and identify other related fields to be addressed.

  11. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a massmarket product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  12. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons of data storage by the two databases are made using gnuplot and Ganglia's real-time graphical user interface.

  13. An Algorithm for Optimized Time, Cost, and Reliability in a Distributed Computing System

    Directory of Open Access Journals (Sweden)

    Pankaj Saxena

    2013-03-01

    Full Text Available A Distributed Computing System (DCS) refers to multiple computer systems working on a single problem. A distributed system consists of a collection of autonomous computers connected through a network, which enables the computers to coordinate their activities and to share the resources of the system. In distributed computing, a single problem is divided into many parts, and each part is solved by a different computer. As long as the computers are networked, they can communicate with each other to solve the problem. A DCS consists of multiple software components that reside on multiple computers but run as a single system. The computers in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. The ultimate goal of distributed computing is to maximize performance in a time-effective, cost-effective, and reliability-effective manner. In a DCS the whole workload is divided into small, independent units called tasks, which are allocated to the available processors. A DCS also ensures fault tolerance and enables resource accessibility in the event that one of the components fails. This paper addresses the problem of assigning tasks in a distributed computing system. The assignment of task modules is done statically: given a set of communicating tasks to be executed on a distributed system with a set of processors, the question is to which processor each task should be assigned to obtain more reliable results in less time and at lower cost. An efficient algorithm for task allocation in terms of optimum time, optimum cost, or optimum reliability is presented for the case where the number of tasks exceeds the number of processors.
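
    A minimal sketch of this kind of static assignment is a greedy rule that places each task on the processor minimizing a weighted combination of finish time and monetary cost. The weights and the time/cost matrices below are hypothetical illustrations; reliability, inter-task communication and the paper's actual allocation algorithm are not modelled.

```python
def assign_tasks(exec_time, exec_cost, w_time=0.5, w_cost=0.5):
    """Greedy static assignment of tasks to processors in a DCS.

    exec_time[i][j] / exec_cost[i][j]: time / cost of running task i on processor j.
    Returns a list mapping each task index to a processor index.
    """
    n_tasks, n_procs = len(exec_time), len(exec_time[0])
    load = [0.0] * n_procs        # accumulated execution time per processor
    assignment = []
    for i in range(n_tasks):
        best = min(
            range(n_procs),
            key=lambda j: w_time * (load[j] + exec_time[i][j]) + w_cost * exec_cost[i][j],
        )
        load[best] += exec_time[i][best]
        assignment.append(best)
    return assignment

# three tasks, two processors (illustrative numbers)
times = [[4, 6], [3, 2], [5, 5]]
costs = [[2, 1], [1, 3], [2, 2]]
print(assign_tasks(times, costs))
```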

  14. Costs of cloud computing for a biometry department. A case study.

    Science.gov (United States)

    Knaus, J; Hieke, S; Binder, H; Schwarzer, G

    2013-01-01

    "Cloud" computing providers, such as Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify, are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud, depending on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into account, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.

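    The comparison above ultimately reduces to a break-even calculation: owning wins once the hardware is busy enough of the year. The figures below are illustrative stand-ins, not the case study's numbers, and indirect costs are ignored just as in the study's headline comparison.

```python
def breakeven_utilization(purchase_cost, annual_overhead, amortization_years, cloud_rate_per_hour):
    """Utilization above which owning hardware beats renting an equivalent cloud instance."""
    annual_onsite = purchase_cost / amortization_years + annual_overhead
    return annual_onsite / (cloud_rate_per_hour * 365 * 24)

# illustrative: 20k EUR machine, 3k EUR/yr overhead, 4-year amortization,
# versus a hypothetical 3.50 EUR/h large-memory cloud instance
print(breakeven_utilization(20_000, 3_000, 4, 3.50))   # ~0.26, i.e. owning wins above ~26% utilization
```
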
  15. Capital cost: low and high sulfur coal plants; 800 MWe

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-01-01

    The Commercial Electric Power Cost Study for 800-MWe (Nominal) low- and high-sulfur coal plants consists of three volumes. (This is the fourth subject in a series of eight performed in the Commercial Electric Power Cost Studies by the US NRC.) The low-sulfur coal plant is described in Volumes I and II (this volume), while Volume III describes the high-sulfur coal plant. The design basis and cost estimate for the 801-MWe low-sulfur coal plant are presented in Volume I, and the drawings, equipment list, and site description are contained in this document. The design basis, drawings, and summary cost estimate for a 794-MWe high-sulfur coal plant are presented in Volume III. This information was developed by redesigning the low-sulfur sub-bituminous coal plant for burning high-sulfur bituminous coal. The reference design includes a lime flue gas desulfurization system. These coal plants utilize a mechanical draft (wet) cooling tower system for condenser heat removal. Costs of alternate cooling systems are provided in Report No. 7 in this series of studies of costs of commercial electrical power plants.

  16. The concept of computer software designed to identify and analyse logistics costs in agricultural enterprises

    Directory of Open Access Journals (Sweden)

    Karol Wajszczyk

    2009-01-01

    Full Text Available The study comprised research, development and computer programming work concerning the development of a concept for an IT tool to be used in the identification and analysis of logistics costs in agricultural enterprises from a process-based perspective. As a result of the research and programming work, an overall functional and IT concept of software for the identification and analysis of logistics costs in agricultural enterprises was developed.

  17. The Department of Defense and the Power of Cloud Computing: Weighing Acceptable Cost Versus Acceptable Risk

    Science.gov (United States)

    2016-04-01

    By examining "commercial only," "private only," and "hybrid" cloud models, the strengths and weaknesses of each are shown, followed by a series of recommendations. Notably, the paper argues that migration to cloud computing should not be done at the individual program level, as is the current practice, and should take place only where it is more cost-effective. (Wright Flyer Paper No. 52, Air University; Steven C. Dudash, Major, Ohio Air National Guard.)

  18. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  19. High-Efficient Low-Cost Photovoltaics Recent Developments

    CERN Document Server

    Petrova-Koch, Vesselinka; Goetzberger, Adolf

    2009-01-01

    A bird's-eye view of the development and problems of recent photovoltaic cells and systems and prospects for Si feedstock is presented. High-efficient low-cost PV modules, making use of novel efficient solar cells (based on c-Si or III-V materials), and low cost solar concentrators are in the focus of this book. Recent developments of organic photovoltaics, which is expected to overcome its difficulties and to enter the market soon, are also included.

  20. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science; Oak Ridge National Lab., TN (United States)]; Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
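
    The block-cyclic distribution mentioned here maps global matrix indices onto a process grid in blocks of size nb, cycling over process rows and columns. The helper below shows only the basic index map; ScaLAPACK's own descriptor conventions differ in detail, so treat this as an assumption-laden sketch.

```python
def block_cyclic_owner(i, j, nb, p_rows, p_cols):
    """Process-grid coordinates owning global entry (i, j) under a block-cyclic layout."""
    return (i // nb) % p_rows, (j // nb) % p_cols

# distribute an 8x8 matrix in 2x2 blocks over a 2x2 process grid
for i in range(8):
    print([block_cyclic_owner(i, j, nb=2, p_rows=2, p_cols=2) for j in range(8)])
```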

  1. Computational Sensing Using Low-Cost and Mobile Plasmonic Readers Designed by Machine Learning.

    Science.gov (United States)

    Ballard, Zachary S; Shir, Daniel; Bhardwaj, Aashish; Bazargan, Sarah; Sathianathan, Shyama; Ozcan, Aydogan

    2017-02-28

    Plasmonic sensors have been used for a wide range of biological and chemical sensing applications. Emerging nanofabrication techniques have enabled these sensors to be cost-effectively mass manufactured onto various types of substrates. To accompany these advances, major improvements in sensor read-out devices must also be achieved to fully realize the broad impact of plasmonic nanosensors. Here, we propose a machine learning framework which can be used to design low-cost and mobile multispectral plasmonic readers that do not use traditionally employed bulky and expensive stabilized light sources or high-resolution spectrometers. By training a feature selection model over a large set of fabricated plasmonic nanosensors, we select the optimal set of illumination light-emitting diodes needed to create a minimum-error refractive index prediction model, which statistically takes into account the varied spectral responses and fabrication-induced variability of a given sensor design. This computational sensing approach was experimentally validated using a modular mobile plasmonic reader. We tested different plasmonic sensors with hexagonal and square periodicity nanohole arrays and revealed that the optimal illumination bands differ from those that are "intuitively" selected based on the spectral features of the sensor, e.g., transmission peaks or valleys. This framework provides a universal tool for the plasmonics community to design low-cost and mobile multispectral readers, helping the translation of nanosensing technologies to various emerging applications such as wearable sensing, personalized medicine, and point-of-care diagnostics. Beyond plasmonics, other types of sensors that operate based on spectral changes can broadly benefit from this approach, including e.g., aptamer-enabled nanoparticle assays and graphene-based sensors, among others.
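
    The band-selection idea can be sketched as a greedy forward search over candidate LED bands, scoring each subset by the cross-validated error of a linear refractive-index predictor. Everything below (the data shapes, the plain least-squares model, the fold count and the synthetic example) is an illustrative stand-in for the framework described, not the authors' pipeline.

```python
import numpy as np

def forward_select_bands(X, y, n_select=4, n_folds=5):
    """Greedily pick the illumination bands minimizing cross-validated prediction error.

    X: (n_measurements, n_bands) sensor response per candidate LED band.
    y: (n_measurements,) reference refractive index values.
    """
    rng = np.random.default_rng(0)
    folds = rng.integers(0, n_folds, size=len(y))
    selected = []
    for _ in range(n_select):
        best_band, best_err = None, np.inf
        for band in range(X.shape[1]):
            if band in selected:
                continue
            cols, errs = selected + [band], []
            for f in range(n_folds):
                tr, te = folds != f, folds == f
                A = np.c_[X[tr][:, cols], np.ones(tr.sum())]   # linear model with bias term
                coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
                pred = np.c_[X[te][:, cols], np.ones(te.sum())] @ coef
                errs.append(np.mean((pred - y[te]) ** 2))
            if np.mean(errs) < best_err:
                best_band, best_err = band, np.mean(errs)
        selected.append(best_band)
    return selected

# synthetic check: the response is driven by bands 2, 5 and 9
rng = np.random.default_rng(1)
X = rng.random((120, 12))
y = X[:, [2, 5, 9]] @ np.array([0.4, 0.3, 0.3]) + 0.01 * rng.standard_normal(120)
print(forward_select_bands(X, y, n_select=3))   # typically recovers [2, 5, 9] in some order
```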

  2. Novel Low Cost, High Reliability Wind Turbine Drivetrain

    Energy Technology Data Exchange (ETDEWEB)

    Chobot, Anthony; Das, Debarshi; Mayer, Tyler; Markey, Zach; Martinson, Tim; Reeve, Hayden; Attridge, Paul; El-Wardany, Tahany

    2012-09-13

    Clipper Windpower, in collaboration with United Technologies Research Center, the National Renewable Energy Laboratory, and Hamilton Sundstrand Corporation, developed a low-cost, deflection-compliant, reliable, and serviceable chain drive speed increaser. This chain and sprocket drivetrain design offers significant breakthroughs in the areas of cost and serviceability and addresses the key challenges of current geared and direct-drive systems. The use of gearboxes has proven to be challenging; the large torques and bending loads associated with use in large multi-MW wind applications have generally limited demonstrated lifetime to 8-10 years [1]. The large cost of gearbox replacement and the required use of large, expensive cranes can result in gearbox replacement costs on the order of $1M, representing a significant impact to overall cost of energy (COE). Direct-drive machines eliminate the gearbox, thereby targeting increased reliability and reduced life-cycle cost. However, the slow rotational speeds require very large and costly generators, which also typically have an undesirable dependence on expensive rare-earth magnet materials and large structural penalties for precise air gap control. The cost of rare-earth materials has increased 20X in the last 8 years representing a key risk to ever realizing the promised cost of energy reductions from direct-drive generators. A common challenge to both geared and direct drive architectures is a limited ability to manage input shaft deflections. The proposed Clipper drivetrain is deflection-compliant, insulating later drivetrain stages and generators from off-axis loads. The system is modular, allowing for all key parts to be removed and replaced without the use of a high capacity crane. Finally, the technology modularity allows for scalability and many possible drivetrain topologies. These benefits enable reductions in drivetrain capital cost by 10.0%, levelized replacement and O&M costs by 26.7%, and overall cost of

  3. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-224, for the B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1) program. The report's sections cover track to budget, cost and funding, low rate initial production, foreign military sales, nuclear costs, and unit cost, followed by a glossary of acronyms (CLIN - Contract Line Item Number, CPD - Capability Production Document, CY - Calendar Year, DAB - Defense Acquisition Board, among others).

  4. 42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).

    Science.gov (United States)

    2010-10-01

    Section 412.84 of Title 42 (Public Health, Centers for Medicare & Medicaid Services) governs payment for extraordinarily high-cost cases (cost outliers). It falls under the subpart addressing payments for outlier cases, special treatment payment for new technology, and payment adjustment for certain replaced devices.

  5. Low-cost computer mouse for the elderly or disabled in Taiwan.

    Science.gov (United States)

    Chen, C-C; Chen, W-L; Chen, B-N; Shih, Y-Y; Lai, J-S; Chen, Y-L

    2014-01-01

    A mouse is an important communication interface between a human and a computer, but it is still difficult for the elderly or disabled to use. The aim was to develop a low-cost computer mouse auxiliary tool. The principal structure of the low-cost mouse auxiliary tool is the IR (infrared ray) array module and the Wii icon sensor module, which combine with reflective tape and the SQL Server database. This has several benefits, including cheap hardware cost, fluent control, prompt response, adaptive adjustment and portability. It also carries a game module for training and evaluation, which helps trainees improve sensory awareness and concentration. In the intervention and maintenance phases, clicking accuracy and time of use improved significantly. Development of the low-cost adaptive computer mouse auxiliary tool was completed during the study, and it was verified as having the characteristics of low cost, easy operation and adaptability. The auxiliary tool is suitable for patients with physical disabilities who retain independent control of some part of their limbs; the user only needs to attach the reflective tape to an independently controllable part of the body to operate the mouse auxiliary tool.

  6. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
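
    The segment-then-featurize pipeline described here can be outlined with a crude global threshold and two per-cell features; real HCS pipelines use far more robust segmentation and hundreds of features per cell, so the function below is only a structural sketch and its names and threshold rule are assumptions.

```python
import numpy as np
from scipy import ndimage

def quantify_cells(image, threshold=None):
    """Segment bright objects and return per-object (area, mean intensity) features."""
    img = np.asarray(image, dtype=float)
    if threshold is None:
        threshold = img.mean() + 2 * img.std()   # crude global threshold
    mask = img > threshold
    labels, n_cells = ndimage.label(mask)        # connected components stand in for cells
    features = []
    for lab in range(1, n_cells + 1):
        cell = labels == lab
        features.append((int(cell.sum()), float(img[cell].mean())))
    # downstream, these feature vectors would feed a classifier or clustering step
    return features
```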

  7. The cognitive dynamics of computer science cost-effective large scale software development

    CERN Document Server

    De Gyurky, Szabolcs Michael; John Wiley & Sons

    2006-01-01

    This book has three major objectives: To propose an ontology for computer software; To provide a methodology for development of large software systems to cost and schedule that is based on the ontology; To offer an alternative vision regarding the development of truly autonomous systems.

  8. Computational Comparison of Several Greedy Algorithms for the Minimum Cost Perfect Matching Problem on Large Graphs

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Laporte, Gilbert

    2017-01-01

    The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage collection.
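
    One of the simplest heuristics in this family repeatedly matches the cheapest remaining pair of unmatched vertices. The sketch below is a generic greedy matcher, not necessarily one of the specific algorithms compared in the paper; it is fast but generally suboptimal, which is exactly the quality/time trade-off such comparisons measure.

```python
def greedy_matching(n, cost):
    """Greedy heuristic for minimum-cost perfect matching on a complete graph.

    n: even number of vertices; cost[i][j]: edge cost. Returns the matched pairs.
    """
    edges = sorted((cost[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    matched = [False] * n
    pairs = []
    for c, i, j in edges:                 # scan edges from cheapest to most expensive
        if not matched[i] and not matched[j]:
            matched[i] = matched[j] = True
            pairs.append((i, j))
    return pairs

# four vertices on a line with unit spacing: pairs (0, 1) and (2, 3), total cost 2
cost = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(greedy_matching(4, cost))
```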

  9. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    Science.gov (United States)

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  11. Low-cost addition-subtraction sequences for the final exponentiation computation in pairings

    DEFF Research Database (Denmark)

    Guzmán-Trampe, Juan E; Cruz-Cortéz, Nareli; Dominguez Perez, Luis

    2014-01-01

    In this paper, we address the problem of finding low cost addition–subtraction sequences for situations where a doubling step is significantly cheaper than a non-doubling one. One application of this setting appears in the computation of the final exponentiation step of the reduced Tate pairing d...
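
    The setting described, where the "subtraction" step is cheap, is the one exploited by signed-digit exponentiation. The sketch below drives a square-and-multiply loop with the non-adjacent form (NAF) of the exponent, which is one standard way to obtain an addition-subtraction sequence; it is not the specific low-cost sequences constructed in the paper, and the cheap modular inversion here merely stands in for the cheap conjugation available in pairing groups.

```python
def naf(k):
    """Non-adjacent form of k as digits in {-1, 0, 1}, least significant first."""
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)    # +1 or -1, chosen so the next bit becomes 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def pow_naf(x, k, mod):
    """x**k mod mod via square-and-multiply over the NAF digits (Python 3.8+ for pow(x, -1, mod))."""
    x_inv = pow(x, -1, mod)    # the 'subtraction' element
    result = 1
    for d in reversed(naf(k)):
        result = (result * result) % mod   # doubling step, assumed cheap
        if d == 1:
            result = (result * x) % mod
        elif d == -1:
            result = (result * x_inv) % mod
    return result

print(pow_naf(7, 1000003, 10**9 + 7) == pow(7, 1000003, 10**9 + 7))   # True
```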

  12. Computer-Based Instruction: A Background Paper on its Status, Cost/Effectiveness and Telecommunications Requirements.

    Science.gov (United States)

    Singh, Jai P.; Morgan, Robert P.

    In the slightly over twelve years since its inception, computer-based instruction (CBI) has shown the promise of being more cost-effective than traditional instruction for certain educational applications. Pilot experiments are underway to evaluate various CBI systems. Should these tests prove successful, a major problem confronting advocates of…

  13. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  14. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at the event level, or with a much larger effort at the track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk reviews the current optimisation activities within the SFT group, with particular emphasis on the development perspectives towards a simulation framework able to profit best from

  15. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...

  16. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor, has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  17. Resources and costs for microbial sequence analysis evaluated using virtual machines and cloud computing.

    Directory of Open Access Journals (Sweden)

    Samuel V Angiuoli

    Full Text Available BACKGROUND: The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. RESULTS: We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. CONCLUSIONS: Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer invested
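
    The dollar figures in this record follow from simple instance-hour arithmetic. A minimal sketch is given below; the hourly rate is a hypothetical placeholder (EC2 pricing varies by instance type and year), and the instance count and wall-clock hours are merely chosen to match the scale quoted above.

        # Back-of-the-envelope cloud cost estimate: cost = instances * hours * rate.
        # The hourly rate is a hypothetical placeholder, not actual EC2 pricing.
        def cloud_cost(n_instances, hours, usd_per_instance_hour):
            return n_instances * hours * usd_per_instance_hour

        # e.g. 15 eight-core instances (~120 CPUs) for 20 hours at an assumed $0.17/hour
        print(f"${cloud_cost(15, 20, 0.17):.2f}")   # $51.00, consistent with "under $60"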

  18. Compact high performance spectrometers using computational imaging

    Science.gov (United States)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.

  19. The high cost of low-acuity ICU outliers.

    Science.gov (United States)

    Dahl, Deborah; Wojtal, Greg G; Breslow, Michael J; Holl, Randy; Huguez, Debra; Stone, David; Korpi, Gloria

    2012-01-01

    Direct variable costs were determined on each hospital day for all patients with an intensive care unit (ICU) stay in four Phoenix-area hospital ICUs. Average daily direct variable cost in the four ICUs ranged from $1,436 to $1,759 and represented 69.4 percent and 45.7 percent of total hospital stay cost for medical and surgical patients, respectively. Daily ICU cost and length of stay (LOS) were higher in patients with higher ICU admission acuity of illness as measured by the APACHE risk prediction methodology; 16.2 percent of patients had an ICU stay in excess of six days, and these LOS outliers accounted for 56.7 percent of total ICU cost. While higher-acuity patients were more likely to be ICU LOS outliers, 11.1 percent of low-risk patients were outliers. The low-risk group included 69.4 percent of the ICU population and accounted for 47 percent of all LOS outliers. Low-risk LOS outliers accounted for 25.3 percent of ICU cost and incurred fivefold higher hospital stay costs and mortality rates. These data suggest that severity of illness is an important determinant of daily resource consumption and LOS, regardless of whether the patient arrives in the ICU with high acuity or develops complications that increase acuity. The finding that a substantial number of long-stay patients come into the ICU with low acuity and deteriorate after ICU admission is not widely recognized and represents an important opportunity to improve patient outcomes and lower costs. ICUs should consider adding low-risk LOS data to their quality and financial performance reports.

  1. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  2. Ultra-high resolution computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
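
    A minimal sketch of the correction step described above, deconvolving each acquired 2-D projection with a measured transfer function before reconstruction, is given below. The Gaussian transfer function, the regularisation constant and the 180-step scan loop are illustrative assumptions, and the final cone-beam reconstruction is left as a placeholder.

        # Sketch: Wiener-style deconvolution of each projection with an experimentally
        # determined transfer function, applied at every rotation step. The Gaussian
        # transfer function and regularisation value are assumptions for illustration.
        import numpy as np

        def deconvolve(projection, transfer_otf, eps=1e-3):
            """Divide out the system transfer function in Fourier space (regularised)."""
            P = np.fft.fft2(projection)
            corrected = P * np.conj(transfer_otf) / (np.abs(transfer_otf) ** 2 + eps)
            return np.real(np.fft.ifft2(corrected))

        ny, nx = 128, 128
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        transfer_otf = np.exp(-((fx ** 2 + fy ** 2) * (2 * np.pi * 1.5) ** 2) / 2)

        corrected_stack = []
        for step in range(180):                     # ~180 degrees in 1-degree steps
            projection = np.random.rand(ny, nx)     # stand-in for an acquired frame
            corrected_stack.append(deconvolve(projection, transfer_otf))
        # A cone-beam algorithm (e.g. a modified tomographic reconstruction) would then
        # turn corrected_stack into a 3-D volume; that step is omitted here.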

  3. Reducing High Absenteeism through Low-Cost Incentives.

    Science.gov (United States)

    North Chaplik, Barbara D.; Engel, Ross A.

    1984-01-01

    Describes a study of the effects of a low-cost incentive program--including daily, weekly, and monthly reinforcements such as attention, approval, and inexpensive awards--on the absenteeism of high-absence employees in an urban school district's transportation department. A 20-percent reduction in absenteeism was achieved. (TE)

  4. Low Cost Lithography Tool for High Brightness LED Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Hawryluk; Emily True

    2012-06-30

    The objective of this activity was to address the need for improved manufacturing tools for LEDs. Improvements include lower cost (both capital equipment cost reductions and cost-of-ownership reductions), better automation and better yields. To meet the DOE objective of $1-2/kilolumen, it will be necessary to develop these highly automated manufacturing tools. Lithography is used extensively in the fabrication of high-brightness LEDs, but the tools used to date are not scalable to high-volume manufacturing. This activity addressed the LED lithography process. During R&D and low volume manufacturing, most LED companies use contact printers. However, several industries have shown that these printers are incompatible with high volume manufacturing, and the LED industry needs to evolve to projection steppers. The need for projection lithography tools for LED manufacturing is identified in the Solid State Lighting Manufacturing Roadmap Draft, June 2009. The Roadmap states that projection tools are needed by 2011. This work will modify a stepper, originally designed for semiconductor manufacturing, for use in LED manufacturing. This work addresses improvements to yield, material handling, automation and throughput for LED manufacturing while reducing the capital equipment cost.

  5. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others]

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical "grand challenge" problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  6. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  8. A high-performance, low-cost, leading edge discriminator

    Indian Academy of Sciences (India)

    S K Gupta; Y Hayashi; A Jain; S Karthikeyan; S Kawakami; K C Ravindran; S C Tonwar

    2005-08-01

    A high-performance, low-cost, leading edge discriminator has been designed with a timing performance comparable to state-of-the-art, commercially available discriminators. A timing error of 16 ps is achieved under ideal operating conditions. Under more realistic operating conditions the discriminator displays a timing error of 90 ps. It has an intrinsic double pulse resolution of 4 ns which is better than most commercial discriminators. A low-cost discriminator is an essential requirement of the GRAPES-3 experiment where a large number of discriminator channels are used.

  9. Achieving High Performance Distributed System: Using Grid, Cluster and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-02-01

    Full Text Available To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. Distributed computing has three major types, namely cluster, grid and cloud. In order to develop a high-performance distributed system, we need to utilize all three of the above-mentioned types of computing. In this paper, we first give an introduction to all three types of distributed computing. Subsequently, examining them, we explore trends in computing and in green sustainable computing to enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green high-performance distributed system using cluster, grid and cloud computing.

  10. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  11. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  12. Low cost phantom for computed radiology; Objeto de teste de baixo custo para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Travassos, Paulo Cesar B.; Magalhaes, Luis Alexandre G., E-mail: pctravassos@ufrj.br [Universidade do Estado do Rio de Janeiro (IBRGA/UERJ), RJ (Brazil). Laboratorio de Ciencias Radiologicas; Augusto, Fernando M.; Sant' Yves, Thalis L.A.; Goncalves, Elicardo A.S. [Instituto Nacional de Cancer (INCA), Rio de Janeiro, RJ (Brazil); Botelho, Marina A. [Hospital Universitario Pedro Ernesto (UERJ), Rio de Janeiro, RJ (Brazil)

    2012-08-15

    This article presents the results obtained from a low cost phantom, used to analyze Computed Radiology (CR) equipment. The phantom was constructed to test a few parameters related to image quality, as described in [1-9]. Materials which can be easily purchased were used in the construction of the phantom, with a total cost of approximately US$ 100.00. A bar pattern was placed only to verify the efficacy of the grids in the spatial resolution determination, and was not included in the budget because the data was acquired from the grids. (author)

  13. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and micro-processor based systems the book makes it possible to compare performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  14. Computer-generated recall letters for underimmunized children: how cost-effective?

    Science.gov (United States)

    Lieu, T A; Black, S B; Ray, P; Schwalbe, J A; Lewis, E M; Lavetter, A; Morozumi, P A; Shinefield, H R

    1997-01-01

    To evaluate the effectiveness and cost effectiveness of computer-generated recall letters to parents of children overdue for immunizations. This randomized controlled trial included children of two facilities in a regional health maintenance organization. Parents of 20-month-olds who had not yet received a measles-mumps-rubella (MMR) immunization were identified via a computerized immunization tracking system. One half were mailed personalized letters that included the recommended immunization schedule and a request to call for an appointment; the other half served as a control group. Receipt of the MMR between 20 and 24 months of age was evaluated with the computerized tracking system. A telephone survey was conducted with parents whose children had not received the MMR by 24 months. Decision analysis was used to project the theoretical outcomes and costs of a recall letter policy for other populations. Among 20-month-old children 10% had not received the MMR; 289 families were included in the analysis. Of families who were mailed letters, 54% (82 of 153) received the MMR by 24 months of age, compared with 35% (47 of 136) of those in the control group (P = 0.001). The telephone survey was completed with 110 parents of children who still did not appear on the health plan computer as having received the MMR by 24 months. Fifteen percent said the child had received an immunization at an outside provider, and of the rest 62% said they had not been aware that an immunization was due. In the cost effectiveness analysis it was projected that recall letters would increase the immunization rate for the regional population of approximately 30000 children from 86% to 90% at a total cost of $5031 annually. The cost per additional child appropriately immunized was $4.04. In sensitivity analyses this cost effectiveness ratio varied depending on the baseline population coverage rate as well as the estimated effectiveness of recall letters. Computer-generated letters to recall

  15. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  16. Slovak High School Students' Attitudes toward Computers

    Science.gov (United States)

    Kubiatko, Milan; Halakova, Zuzana; Nagyova, Sona; Nagy, Tibor

    2011-01-01

    The pervasive involvement of information and communication technologies and computers in our daily lives influences changes of attitude toward computers. We focused on finding these ecological effects in the differences in computer attitudes as a function of gender and age. A questionnaire with 34 Likert-type items was used in our research. The…

  17. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed main stream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  18. Techniques to Minimize State Transfer Cost for Dynamic Execution Offloading In Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    S.Rathnapriya

    2015-05-01

    Full Text Available The recent advancement in cloud computing is leading to an excessive growth of mobile devices that can become powerful means for information access and mobile applications. This has introduced a latent technology called mobile cloud computing. Smartphone devices support a wide range of mobile applications which require high computational power, memory, storage and energy, but these resources are limited in number and so act as constraints in smartphone devices. With the integration of cloud computing and mobile applications it is possible to overcome these constraints by offloading the complex modules to the cloud. These restrictions may be alleviated by computation offloading: sending heavy computations to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade.
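
    The basic trade-off behind computation offloading (ship work to a remote server only when the transfer overhead is outweighed by the speedup) is often written as a simple timing inequality. A minimal sketch follows; the parameter names and example numbers are assumptions for illustration, not values from this paper.

        # Classic offloading decision rule (a sketch, not this paper's exact model):
        # offload when local execution time exceeds remote execution plus transfer time.
        def should_offload(cycles, local_hz, remote_hz, state_bytes, bandwidth_bps):
            t_local = cycles / local_hz
            t_remote = cycles / remote_hz + (8 * state_bytes) / bandwidth_bps
            return t_remote < t_local

        # e.g. 5e9 CPU cycles, a 1 GHz handset vs. a 10x faster server, 2 MB of state,
        # and a 10 Mbit/s link: offloading pays off despite the state transfer.
        print(should_offload(5e9, 1e9, 10e9, 2e6, 10e6))   # True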

  19. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    Energy Technology Data Exchange (ETDEWEB)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
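
    At its core, a levelized life-cycle power cost is a ratio of discounted lifetime costs to discounted lifetime generation. The sketch below shows that calculation only; the cash flows and discount rate are hypothetical, and the actual code handles far more detail (fuel cycle, taxes, escalation).

        # Sketch of a levelized power cost: present value of costs divided by the
        # present value of energy generated. All input numbers are hypothetical.
        def levelized_cost(costs, energy, discount_rate):
            """costs[t] in $ and energy[t] in kWh for plant years t = 0..N-1."""
            pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
            pv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy))
            return pv_costs / pv_energy          # $/kWh

        capital = [2.0e9] + [0.0] * 29           # year-0 construction cost
        om_fuel = [0.0] + [1.5e8] * 29           # annual O&M plus fuel
        costs = [c + o for c, o in zip(capital, om_fuel)]
        energy = [0.0] + [8.0e9] * 29            # kWh generated per year
        print(f"{levelized_cost(costs, energy, 0.05):.4f} $/kWh")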

  20. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    Energy Technology Data Exchange (ETDEWEB)

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  1. Application of a single-board computer as a low cost pulse generator

    CERN Document Server

    Fedrizzi, Marcus

    2015-01-01

    A BeagleBone Black (BBB) single-board open-source computer was implemented as a low-cost fully programmable pulse generator. The pulse generator makes use of the BBB Programmable Real-Time Unit (PRU) subsystem to achieve a deterministic temporal resolution of 5 ns, an RMS jitter of 290 ps and a timebase stability on the order of 10 ppm. A Python-based software framework has also been developed to simplify the usage of the pulse generator.
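
    The 5 ns resolution quoted above corresponds to the PRU's 200 MHz instruction clock, so pulse timings are naturally expressed as whole 5 ns cycles. The sketch below only illustrates that quantisation step; the Pulse class and helper function are hypothetical and are not the API of the framework described in this record.

        # Sketch: quantising requested pulse timings to the PRU's 5 ns instruction
        # clock (200 MHz). The Pulse class and helper below are hypothetical, not
        # the actual framework's API.
        from dataclasses import dataclass

        PRU_CLOCK_HZ = 200_000_000          # BeagleBone Black PRU core clock
        TICK_NS = 1e9 / PRU_CLOCK_HZ        # 5 ns per instruction cycle

        @dataclass
        class Pulse:
            delay_ns: float                 # time before the rising edge
            width_ns: float                 # high time of the pulse

        def to_pru_ticks(pulse):
            """Round a pulse definition to whole 5 ns PRU cycles."""
            return round(pulse.delay_ns / TICK_NS), round(pulse.width_ns / TICK_NS)

        print(to_pru_ticks(Pulse(delay_ns=1000.0, width_ns=250.0)))   # (200, 50)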

  2. Implications of Using Computer-Based Training on System Readiness and Operating & Support Costs

    Science.gov (United States)

    2014-07-18

    The Program Executive Office Integrated Warfare Systems 5 (PEO IWS5) provided a list of ships equipped with the AN/SQQ... system; ships with the system on board both before and after implementation of CBT were considered, and the initial list provided by PEO IWS5 included all ships of the CG-47 and DD-963... classes. Referenced works include the Total Ownership Cost (TOC) Guidebook and Dhanjal, R., & Calis, G. (1999). Computer Based Training in the Steel Industry. Steel Times, 227(4), 130-131.

  3. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
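
    The 80% specificity and sensitivity guarantee quoted above uses the standard confusion-matrix definitions. A minimal worked sketch follows; the counts are hypothetical, chosen only to show the calculation.

        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
        # The counts below are hypothetical, only to illustrate the definitions.
        def sensitivity(tp, fn):
            return tp / (tp + fn)

        def specificity(tn, fp):
            return tn / (tn + fp)

        print(sensitivity(tp=850, fn=150))   # 0.85, meets an 80% sensitivity target
        print(specificity(tn=880, fp=120))   # 0.88, meets an 80% specificity target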

  4. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Science.gov (United States)

    2010-01-01

    12 CFR Part 226, Appendix K (Title 12, Banks and Banking, 2010): Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions. The charges associated with such a loan typically include principal, interest, closing costs, mortgage...

  5. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  6. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  7. Access to high cost medicines in Australia: ethical perspectives.

    Science.gov (United States)

    Lu, Christine Y; Macneill, Paul; Williams, Ken; Day, Ric

    2008-05-19

    Access to "high cost medicines" through Australia's Pharmaceutical Benefits Scheme (PBS) is tightly regulated. It is inherently difficult to apply any criteria-based system of control in a way that provides a fair balance between efficient use of limited resources for community needs and equitable individual access to care. We suggest, in relation to very high cost medicines, that the present arrangements be re-considered in order to overcome potential inequities. The biological agents for the treatment of rheumatoid arthritis are used as an example by which to discuss the ethical issues associated with the current scheme. Consideration of ethical aspects of the PBS and similar programs is important in order to achieve the fairest outcomes for individual patients, as well as for the community.

  8. Reducing annotation cost and uncertainty in computer-aided diagnosis through selective iterative classification

    Science.gov (United States)

    Riely, Amelia; Sablan, Kyle; Xiaotao, Thomas; Furst, Jacob; Raicu, Daniela

    2015-03-01

    Medical imaging technology has always provided radiologists with the opportunity to view and keep records of the anatomy of the patient. With the development of machine learning and intelligent computing, these images can be used to create Computer-Aided Diagnosis (CAD) systems, which can assist radiologists in analyzing image data in various ways to provide better health care to patients. This paper looks at increasing accuracy and reducing cost in creating CAD systems, specifically in predicting the malignancy of lung nodules in the Lung Image Database Consortium (LIDC). Much of the cost in creating an accurate CAD system stems from the need for multiple radiologist diagnoses or annotations of each image, since there is rarely a ground truth diagnosis and even different radiologists' diagnoses of the same nodule often disagree. To resolve this issue, this paper outlines a method of selective iterative classification that predicts lung nodule malignancy by using multiple radiologist diagnoses only for cases that can benefit from them. Our method achieved 81% accuracy while costing only 46% as much as the method that indiscriminately used all annotations, which achieved a lower accuracy of 70%.
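
    A minimal sketch of the selective idea, requesting additional radiologist annotations only for cases whose current estimate is uncertain, is shown below. The uncertainty band, the averaging rule and the toy rating scale are illustrative assumptions, not the paper's exact algorithm.

        # Sketch of selective iterative labelling: start from one annotation per case
        # and request further annotations only while the current estimate is uncertain.
        def consensus(annotations):
            """Average of the available radiologist ratings (1..5 malignancy scale)."""
            return sum(annotations) / len(annotations)

        def selective_label(all_annotations, low=2.5, high=3.5, max_reads=4):
            used = [all_annotations[0]]                   # always pay for one reading
            while low <= consensus(used) <= high and len(used) < max_reads:
                used.append(all_annotations[len(used)])   # uncertain: buy one more
            score = consensus(used)
            return ("malignant" if score > 3 else "benign"), len(used)

        print(selective_label([1, 2, 2, 1]))   # ('benign', 1): one reading suffices
        print(selective_label([3, 4, 5, 4]))   # ('malignant', 3): extra readings bought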

  9. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid-1990s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm to provide a large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  10. Norplant's high cost may prohibit use in Title 10 clinics.

    Science.gov (United States)

    1991-04-01

    The article discusses the prohibitive cost of Norplant for the Title 10 low-income population served in public family planning clinics in the U.S. It is argued that it is unfair for U.S. users to pay $350 to Wyeth-Ayerst when another pharmaceutical company provides developing countries with Norplant at a cost of $14-23. Although the public sector and private foundations funded the development, it was explained that the company needs to recoup its investment in training and education. Medicaid and third-party payers such as insurance companies will reimburse the higher price, but if the public-sector price were lowered, the company would not make a profit and everyone would argue for reimbursement at the lower cost. It was suggested that a boycott of American Home Products, Wyeth-Ayerst's parent company, be made. One public family planning provider, particularly low in funding, reflected that a budget of $30,000 would cover only 85 users, who in this circumstance would be drug abusers and women with multiple pregnancies, leaving the need of teenagers unfulfilled. Another remarked that the client population served is 4,700 with $54,000 in funding, which is already accounted for. The general trend of comments was that for low-income women the cost is too high.

  11. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  12. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  13. High flexibility and low cost digital implementation for modern PWM strategies

    DEFF Research Database (Denmark)

    Mathe, Laszlo; Sera, Dezso; Kerekes, Tamas

    2011-01-01

    In this paper a new low-cost technique for PWM strategy implementation is presented. The proposed technique does not require dedicated hardware PWM units, thus offering higher flexibility of use. Furthermore, the aforementioned method eases the digital implementation of modern modulation methods ... with hardware PWM unit. The experimental results show that this new technique is suitable to replace traditional implementation methods with minimum computational overhead, with the benefit of high flexibility, lower cost and faster code development.

  14. Cost-effectiveness of screening for lung cancer with low-dose computed tomography: a systematic literature review.

    Science.gov (United States)

    Puggina, Anna; Broumas, Athanasios; Ricciardi, Walter; Boccia, Stefania

    2016-02-01

    On 31 December 2013, the US Preventive Services Task Force rated low-dose computed tomography (LDCT) for lung cancer screening as a level 'B' recommendation. Yet, lung cancer screening implementation remains controversial, particularly when considering its cost-effectiveness. The aim of this work is to investigate the cost-effectiveness of LDCT screening programs for lung cancer by performing a systematic literature review. We reviewed the published economic evaluations of LDCT in lung cancer screening. MEDLINE, ISI Web of Science and Cochrane databases were searched for literature retrieval up to 31 March 2015. Inclusion criteria included: studies reporting an original full economic evaluation; reports presenting the outcomes as Quality-Adjusted Life Years (QALYs) gained or as Life Years Gained. Nine economic evaluations met the inclusion criteria. All the cost-effectiveness analyses included high risk populations for lung cancer and compared the use of annual LDCT screening with no screening. Seven studies reported an incremental cost-effectiveness ratio below the threshold of US$ 100 000 per QALY gained. Cost-effectiveness of LDCT screening for lung cancer is a highly debatable issue. Currently available economic evaluations suggest the cost-effectiveness of LDCT for lung cancer screening compared with no screening and indicate that the implementation of LDCT should be considered when planning a national lung cancer screening program. Additional economic evaluations, especially from a societal perspective and in an EU setting, are needed. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
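
    The US$ 100,000-per-QALY comparisons above rest on the incremental cost-effectiveness ratio (ICER). A minimal worked sketch is given below; the cost and QALY figures are hypothetical, not values from the reviewed studies.

        # ICER = (cost_screening - cost_no_screening) / (QALY_screening - QALY_no_screening).
        # All numbers below are hypothetical, used only to show the calculation.
        def icer(cost_new, cost_old, qaly_new, qaly_old):
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        ratio = icer(cost_new=8_000, cost_old=5_600, qaly_new=12.03, qaly_old=12.00)
        print(f"${ratio:,.0f} per QALY gained")   # about $80,000: below a $100,000 threshold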

  15. Coverage Options for a Low cost, High Resolution Optical Constellation

    OpenAIRE

    Price, M E; Levett, W.; Graham, K.

    2003-01-01

    This paper presents the range of coverage options available to TopSat-like small satellites, both singly and in a small constellation. TopSat is a low-cost optical small satellite offering high resolution and image quality, due for launch in October 2004. In particular, the paper considers the use of tuned, repeat ground track orbits to improve coverage for selected ground targets, at the expense of global coverage. TopSat is designed to demonstrate the capabilities of small satellites for high valu...

  16. Context-aware computing-based reducing cost of service method in resource discovery and interaction

    Institute of Scientific and Technical Information of China (English)

    TANG Shan-cheng; HOU Yi-bin

    2004-01-01

    Reducing the cost of service is an important goal for resource discovery and interaction technologies. The shortcomings of the transhipment method and the hibernation method are that they increase the holistic cost of service and slow down resource discovery, respectively. To overcome these shortcomings, a context-aware computing-based method is developed. This method first analyzes the courses of devices using resource discovery and interaction technologies to identify some types of context related to reducing the cost of service; it then chooses effective measures such as stopping broadcast and hibernation to reduce the cost of service according to information supplied by the context, rather than relying on the transhipment method's simple hibernations. The results of experiments indicate that, under the worst condition, this method overcomes the shortcomings of the transhipment method, makes the "poor" devices hibernate longer than the hibernation method so as to reduce the cost of service more effectively, and discovers resources faster than the hibernation method; under the best condition it is far better than the hibernation method in all aspects.

  17. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations try to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS datasets at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. The future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
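
    A minimal sketch of the event-level parallelism described above, distributing events over MPI ranks, is given below using mpi4py for brevity (the projects themselves are not Python). The per-event weight function is a stand-in, not the actual Matrix Element Method integrand, and no OpenCL offload is shown.

        # Sketch: scatter events across MPI ranks, compute a per-event weight locally,
        # and gather the results on rank 0. The weight function is a placeholder, not
        # the matrix-element integrand. Run with e.g. `mpirun -n 4 python mem_sketch.py`.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            events = list(range(1000))                       # stand-in event records
            chunks = [events[i::size] for i in range(size)]  # simple round-robin split
        else:
            chunks = None

        local_events = comm.scatter(chunks, root=0)

        def mem_weight(event):
            """Placeholder for the expensive per-event likelihood integration."""
            return (event % 7) * 0.1

        local_weights = [mem_weight(e) for e in local_events]
        all_weights = comm.gather(local_weights, root=0)

        if rank == 0:
            flat = [w for part in all_weights for w in part]
            print(f"processed {len(flat)} events")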

  18. Patents associated with high-cost drugs in Australia.

    Directory of Open Access Journals (Sweden)

    Andrew F Christie

    Full Text Available Australia, like most countries, faces high and rapidly-rising drug costs. There are longstanding concerns about pharmaceutical companies inappropriately extending their monopoly position by "evergreening" blockbuster drugs, through misuse of the patent system. There is, however, very little empirical information about this behaviour. We fill the gap by analysing all of the patents associated with 15 of the costliest drugs in Australia over the last 20 years. Specifically, we search the patent register to identify all the granted patents that cover the active pharmaceutical ingredient of the high-cost drugs. Then, we classify the patents by type, and identify their owners. We find a mean of 49 patents associated with each drug. Three-quarters of these patents are owned by companies other than the drug's originator. Surprisingly, the majority of all patents are owned by companies that do not have a record of developing top-selling drugs. Our findings show that a multitude of players seek monopoly control over innovations to blockbuster drugs. Consequently, attempts to control drug costs by mitigating misuse of the patent system are likely to miss the mark if they focus only on the patenting activities of originators.

  19. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  1. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones: We demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and show that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), which are an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. Next, we developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.

  2. The High Value CVT Concept--Cost Effective and Powerful

    Institute of Scientific and Technical Information of China (English)

    A. Englisch,; A. Teubert; A. Gotz; E. Muller; E. Simon; B. Walter; A. Baumgartner

    2011-01-01

    Based on a comprehensive comparison of vehicle performance in economy, engine power, driving smoothness, and efficiency cost as well as pollutant emission, the paper discusses the high value CVT concept from the angle of a cost-effective and powerful vehicle, and investigates the related technical details of the CVT. By realizing a continuous change in transmission ratio, the CVT can obtain the optimal matching between the transmission system and the engine operating mode, enhance fuel economy, and improve convenience of manipulation for the driver and comfort for passengers. To make the concept easy to understand, the paper provides comparative analyses of many aspects such as performance, transmission specification, high value CVT hybrid, orifice torque sensor, hydraulic system, high value CVT em, new chain portfolio and assessment of the high value CVT on the NEDC. Finally, it shows the potential advantages of CVT technology development and proposes future development trends to realize the technical scheme of the high value CVT.

  3. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  4. High Fidelity Adiabatic Quantum Computation via Dynamical Decoupling

    CERN Document Server

    Quiroz, Gregory

    2012-01-01

    We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.

  5. A High-performance Low Cost Inverse Integer Transform Architecture for AVS Video Standard

    Institute of Scientific and Technical Information of China (English)

    LI Yu-fei; WANG Qin; FU Yu-zhuo

    2008-01-01

    A high-performance, low cost inverse integer transform architecture for the advanced video standard (AVS) video coding standard is presented. An 8×8 inverse integer transform, which is compute-intensive, is required in an AVS video system, so a hardware transform is inevitable for real-time applications. Compared with the 4×4 transform of H.264/AVC, the 8×8 integer transform is much more complex, and the coefficients in the inverse transform matrix Ts are not as regular as those in H.264/AVC. Dividing Ts into matrices S8 and R8, the proposed architecture is implemented with adders and specific CSA trees instead of multipliers, which are area and time consuming. The architecture achieves a data processing rate of up to 8 pixels per cycle at a low area cost. Synthesized for a TSMC 0.18 μm CMOS process, the architecture attains an operating frequency of 300 MHz at a cost of 34,252 gates with a 2-stage pipeline scheme. A reusable scheme is also introduced for area optimization, which results in an operating frequency of 143 MHz at a cost of only 19,758 gates.
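
    The multiplier-free idea behind the architecture, replacing each constant multiplication in the transform with a few shifts and adds whose partial results the CSA trees sum in hardware, can be illustrated in software. The constants below are arbitrary examples, not the actual coefficients of the AVS matrix Ts.

        # Software illustration of multiplier-free constant multiplication: each constant
        # is decomposed into shift-and-add terms, which is what the adder/CSA-tree
        # hardware implements. Example constants only, not the AVS inverse-transform ones.
        def times_10(x):   # 10*x = 8*x + 2*x
            return (x << 3) + (x << 1)

        def times_9(x):    # 9*x = 8*x + x
            return (x << 3) + x

        def times_6(x):    # 6*x = 4*x + 2*x
            return (x << 2) + (x << 1)

        for x in (1, 7, -13):
            assert times_10(x) == 10 * x and times_9(x) == 9 * x and times_6(x) == 6 * x
        print("shift-add identities hold")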

  6. Raspberry Pi- A Small, Powerful, Cost Effective and Efficient Form Factor Computer: A Review

    Directory of Open Access Journals (Sweden)

    Anand Nayyar

    2015-12-01

    Full Text Available The Raspberry Pi, an efficient and cost-effective credit-card-sized computer, was brought to light by the United Kingdom-based Raspberry Pi Foundation with the aim of enlightening and empowering computer science teaching in schools and in developing countries. Since its inception, various open source communities have contributed a great deal towards open source apps, operating systems and various other small form factor computers similar to the Raspberry Pi. To date, researchers, hobbyists and other embedded systems enthusiasts across the planet have been building remarkable projects using the Pi, with impressive out-of-the-box implementations. Since its launch, the Raspberry Pi has been under constant development and improvement in both hardware and software, which in turn is making the Pi a full-fledged computer that can be considered for almost all computing-intensive tasks. The aim of this paper is to explain what the Raspberry Pi is, why it is needed, the generations of the Raspberry Pi, the operating systems available for the Pi to date, and other hardware available for project development. This paper will lay a foundation for various open source communities across the planet to become aware of and use this credit-card-sized computer for projects ranging from day-to-day activities to scientific and complex application development.

  7. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  8. A cost of sexual attractiveness to high-fitness females.

    Directory of Open Access Journals (Sweden)

    Tristan A F Long

    2009-12-01

    Full Text Available Adaptive mate choice by females is an important component of sexual selection in many species. The evolutionary consequences of male mate preferences, however, have received relatively little study, especially in the context of sexual conflict, where males often harm their mates. Here, we describe a new and counterintuitive cost of sexual selection in species with both male mate preference and sexual conflict via antagonistic male persistence: male mate choice for high-fecundity females leads to a diminished rate of adaptive evolution by reducing the advantage to females of expressing beneficial genetic variation. We then use a Drosophila melanogaster model system to experimentally test the key prediction of this theoretical cost: that antagonistic male persistence is directed toward, and harms, intrinsically higher-fitness females more than it does intrinsically lower-fitness females. This asymmetry in male persistence causes the tails of the population's fitness distribution to regress towards the mean, thereby reducing the efficacy of natural selection. We conclude that adaptive male mate choice can lead to an important, yet unappreciated, cost of sex and sexual selection.

  9. High Performance, Low Cost Hydrogen Generation from Renewable Energy

    Energy Technology Data Exchange (ETDEWEB)

    Ayers, Katherine [Proton OnSite; Dalton, Luke [Proton OnSite; Roemer, Andy [Proton OnSite; Carter, Blake [Proton OnSite; Niedzwiecki, Mike [Proton OnSite; Manco, Judith [Proton OnSite; Anderson, Everett [Proton OnSite; Capuano, Chris [Proton OnSite; Wang, Chao-Yang [Penn State University; Zhao, Wei [Penn State University

    2014-02-05

    Renewable hydrogen from proton exchange membrane (PEM) electrolysis is gaining strong interest in Europe, especially in Germany where wind penetration is already at critical levels for grid stability. For this application as well as biogas conversion and vehicle fueling, megawatt (MW) scale electrolysis is required. Proton has established a technology roadmap to achieve the necessary cost reductions and manufacturing scale up to maintain U.S. competitiveness in these markets. This project represents a highly successful example of the potential for cost reduction in PEM electrolysis, and provides the initial stack design and manufacturing development for Proton’s MW scale product launch. The majority of the program focused on the bipolar assembly, from electrochemical modeling to subscale stack development through prototyping and manufacturing qualification for a large active area cell platform. Feasibility for an advanced membrane electrode assembly (MEA) with 50% reduction in catalyst loading was also demonstrated. Based on the progress in this program and other parallel efforts, H2A analysis shows the status of PEM electrolysis technology dropping below $3.50/kg production costs, exceeding the 2015 target.

  10. Design of a Low-Cost Single-Board Computer System for Use In Low-Earth Orbit Small Satellite Missions

    OpenAIRE

    Milani, Dino

    1996-01-01

    A single-board computer system created specifically to meet the demands of a new generation of small satellite missions is being designed, built and tested by students at the University of New Hampshire. The Satellite Single-Board Computer (SSBC) is an Intel 80C186 based system that is qualified for explicit use in low-earth orbit missions. The SSBC serves as a low-cost, high-quality alternative to commercially available systems which are usually very costly and designed for much harsher spac...

  11. Design and Build of an Electrical Machines’ High Speed Measurement System at Low Cost

    Directory of Open Access Journals (Sweden)

    Constantinos C. Kontogiannis

    2014-01-01

    Full Text Available The principal objective of this paper is to demonstrate that high-speed measurement and acquisition equipment can be designed and built in the laboratory at very low cost. The presented architecture employs highly integrated, commercially available components, thus eliminating much of the complexity of the hardware and software stack. The key element of the proposed system is a Hi-Speed USB to Serial/FIFO development module that comes with full software and driver support for most popular operating systems. This module takes over every task needed to move the data from the A/D converter to the user software gluelessly and transparently, thereby solving the most difficult problem in data acquisition systems, namely fast and reliable communication with a host computer. Other ideas tested and included in this document offer Hall-effect measuring solutions based on very low cost ICs with excellent features that are widely available on the market today.

  12. Development and Use of the Life Cycle Cost in Design Computer Program (LCCID).

    Science.gov (United States)

    1985-11-01

    Signing Off the Computer: once the program is complete, $OFF <CR> will log you off the Harris system. SPECIFY ANNUAL VALUES: S = Define/Change Annual Values, D = Delete an Annual Value, <cr> = exit. ... causes the printer to begin logging all text coming to the screen. LIFE CYCLE COST ANALYSIS.

  13. Cost Optimization Technique of Task Allocation in Heterogeneous Distributed Computing System

    Directory of Open Access Journals (Sweden)

    Faizul Navi Khan

    2013-11-01

    Full Text Available A Distributed Computing System (DCS) is a network of workstations, personal computers, and/or other computing systems. Such a system may be heterogeneous in the sense that the computing nodes may have different speeds and memory capacities. A DCS accepts tasks from users and executes different modules of these tasks on various nodes of the system. Task allocation in a DCS is a common problem, and a good number of task allocation algorithms have been proposed in the literature. In such an environment, an application running in the DCS can be accessed from every node of the system. If the number of tasks is less than or equal to the number of available processors, the tasks can be assigned without any trouble, but the allocation becomes complicated when the number of tasks exceeds the number of processors. The problem of allocating 'm' tasks to 'n' processors (m>n) in a DCS is addressed here through a new, modified task allocation technique. The model presented in this paper allocates the tasks to processors of different processing capacities to increase the performance of the DCS. The technique is based on the processing cost of each task on each processor: all tasks are assigned according to the availability of processors and their processing capacities so that every task of the application is executed in the DCS.
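
    As a rough illustration of cost-based allocation of m tasks to n processors (m > n), the following sketch greedily assigns each task to the processor where its incremental cost, on top of the load already assigned, is lowest. The cost matrix and the greedy rule are illustrative assumptions, not the allocation technique proposed in the paper.

```python
# Illustrative greedy cost-based allocation of m tasks to n processors (m > n).
# The cost matrix and the load-balancing rule are assumptions for illustration only.

def allocate_tasks(cost, n_processors):
    """cost[i][j] = processing cost of task i on processor j."""
    loads = [0.0] * n_processors          # cumulative cost already assigned per processor
    assignment = {}
    for task, row in enumerate(cost):
        # pick the processor that minimises (current load + cost of this task there)
        best = min(range(n_processors), key=lambda j: loads[j] + row[j])
        assignment[task] = best
        loads[best] += row[best]
    return assignment, loads

if __name__ == "__main__":
    # 6 tasks, 3 heterogeneous processors
    cost = [[4, 2, 8], [5, 6, 1], [3, 7, 4],
            [6, 3, 2], [2, 5, 9], [7, 4, 3]]
    assignment, loads = allocate_tasks(cost, 3)
    print("assignment:", assignment)
    print("processor loads:", loads)
```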

  14. Proceedings CSR 2010 Workshop on High Productivity Computations

    CERN Document Server

    Ablayev, Farid; Vasiliev, Alexander; 10.4204/EPTCS.52

    2011-01-01

    This volume contains the proceedings of the Workshop on High Productivity Computations (HPC 2010) which took place on June 21-22 in Kazan, Russia. This workshop was held as a satellite workshop of the 5th International Computer Science Symposium in Russia (CSR 2010). HPC 2010 was intended to organize the discussions about high productivity computing means and models, including but not limited to high performance and quantum information processing.

  15. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent pieces of data as input and produce independent pieces as output; the independence comes from the nature of such algorithms, since images, stereopairs, or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicking, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that take several days to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations and higher speeds in local networks, and as a result a dropping price for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in digital photogrammetric workstations (DPW) is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.
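
    A hedged sketch of the embarrassingly parallel pattern described above: independent image tiles handed to a pool of workers. The tile splitting and the per-tile operation are placeholders, not the photogrammetric algorithms themselves.

```python
# Minimal sketch: process independent image tiles in parallel (placeholder workload).
from multiprocessing import Pool
import numpy as np

def process_tile(tile):
    # Placeholder for an independent photogrammetric operation on one tile
    # (e.g., tie-point measurement or a DTM calculation would go here).
    return float(tile.mean())

def split_into_tiles(image, tile_size=256):
    h, w = image.shape
    return [image[r:r + tile_size, c:c + tile_size]
            for r in range(0, h, tile_size)
            for c in range(0, w, tile_size)]

if __name__ == "__main__":
    image = np.random.rand(1024, 1024)      # stand-in for a large raster image
    tiles = split_into_tiles(image)
    with Pool() as pool:                     # one worker per CPU core by default
        results = pool.map(process_tile, tiles)
    print(len(results), "tiles processed")
```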

  16. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure...

  17. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  18. The costs and cost-efficiency of providing food through schools in areas of high food insecurity.

    Science.gov (United States)

    Gelli, Aulo; Al-Shaiba, Najeeb; Espejo, Francisco

    2009-03-01

    The provision of food in and through schools has been used to support the education, health, and nutrition of school-aged children. The monitoring of financial inputs into school health and nutrition programs is critical for a number of reasons, including accountability, transparency, and equity. Furthermore, there is a gap in the evidence on the costs, cost-efficiency, and cost-effectiveness of providing food through schools, particularly in areas of high food insecurity. To estimate the programmatic costs and cost-efficiency associated with providing food through schools in food-insecure, developing-country contexts, by analyzing global project data from the World Food Programme (WFP). Project data, including expenditures and number of schoolchildren covered, were collected through project reports and validated through WFP Country Office records. Yearly project costs per schoolchild were standardized over a set number of feeding days and the amount of energy provided by the average ration. Output metrics, such as tonnage, calories, and micronutrient content, were used to assess the cost-efficiency of the different delivery mechanisms. The average yearly expenditure per child, standardized over a 200-day on-site feeding period and an average ration, excluding school-level costs, was US$21.59. The costs varied substantially according to choice of food modality, with fortified biscuits providing the least costly option of about US$11 per year and take-home rations providing the most expensive option at approximately US$52 per year. Comparisons across the different food modalities suggested that fortified biscuits provide the most cost-efficient option in terms of micronutrient delivery (particularly vitamin A and iodine), whereas on-site meals appear to be more efficient in terms of calories delivered. Transportation and logistics costs were the main drivers for the high costs. The choice of program objectives will to a large degree dictate the food modality

  19. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  20. A high-throughput bioinformatics distributed computing platform

    OpenAIRE

    Keane, Thomas M; Page, Andrew J.; McInerney, James O; Naughton, Thomas J.

    2005-01-01

    In the past number of years the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in size of many genomic databases has meant that many common tasks in bioinformatics are not possible to complete in a reasonable amount of time on a single processor. Recently distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...

  1. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary experti...

  2. Computational sensing of herpes simplex virus using a cost-effective on-chip microscope

    KAUST Repository

    Ray, Aniruddha

    2017-07-03

    Caused by the herpes simplex virus (HSV), herpes is a viral infection that is one of the most widespread diseases worldwide. Here we present a computational sensing technique for specific detection of HSV using both viral immuno-specificity and the physical size range of the viruses. This label-free approach involves a compact and cost-effective holographic on-chip microscope and a surface-functionalized glass substrate prepared to specifically capture the target viruses. To enhance the optical signatures of individual viruses and increase their signal-to-noise ratio, self-assembled polyethylene glycol based nanolenses are rapidly formed around each virus particle captured on the substrate using a portable interface. Holographic shadows of specifically captured viruses that are surrounded by these self-assembled nanolenses are then reconstructed, and the phase image is used for automated quantification of the size of each particle within our large field-of-view, ~30 mm2. The combination of viral immuno-specificity due to surface functionalization and the physical size measurements enabled by holographic imaging is used to sensitively detect and enumerate HSV particles using our compact and cost-effective platform. This computational sensing technique can find numerous uses in global health related applications in resource-limited environments.

  3. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
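
    A hedged numerical illustration of the quoted estimates: the sketch below tabulates the sequential and shared-memory parallel cost formulas for the three-dimensional case over a few values of N and p, purely to show how the two scalings diverge; constants are ignored and nothing here reproduces the solver itself.

```python
# Compare the quoted asymptotic cost estimates for the 3-D case (constants ignored).
def sequential_cost_3d(N, p):
    return N**2 * p**3            # O(N^2 p^3), sequential multi-frontal solver

def parallel_cost_3d(N, p):
    return N**(4 / 3) * p**2      # O(N^(4/3) p^2), ideal shared-memory parallel solver

for p in (1, 2, 3, 4, 5):             # polynomial orders (linear .. quintic B-splines)
    for N in (10**4, 10**5, 10**6):   # degrees of freedom
        ratio = sequential_cost_3d(N, p) / parallel_cost_3d(N, p)
        print(f"p={p}, N={N:>8}: sequential/parallel cost ratio ~ {ratio:,.0f}")
```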

  4. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power for the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over the past 30 years, with the support of governments, the technology of high performance computers has developed rapidly: computing performance has increased by nearly 3 million times and the number of processors has grown by more than a million times. To solve the critical issues of parallel efficiency and scalability, scientific researchers have pursued extensive theoretical studies and technical innovations. The paper briefly looks back at the course of building high performance computer systems both at home and abroad, and summarizes the significant breakthroughs in international high performance computer technology. We also review China's technological progress in parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues of the "memory wall", system scalability, and the "power wall", and discuss high productivity computers, which are the trend in building the next generation of high performance computers.

  5. High-temperature superconducting transformer performance, cost, and market evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Dirks, J.A.; Dagle, J.E.; DeSteese, J.G.; Huber, H.D.; Smith, S.A.; Currie, J.W. [Pacific Northwest Lab., Richland, WA (United States); Merrick, S.B. [Westinghouse Hanford Co., Richland, WA (United States); Williams, T.A. [National Renewable Energy Lab., Golden, CO (United States)

    1993-09-01

    Recent laboratory breakthroughs in high-temperature superconducting (HTS) materials have stimulated both the scientific community and general public with questions regarding how these materials can be used in practical applications. While there are obvious benefits from using HTS materials (most notably the potential for reduced energy losses in the conductors), a number of issues (such as overall system energy losses, cost, and reliability) may limit applications of HTS equipment, even if the well known materials problems are solved. This study examined the future application potential of HTS materials to power transformers. This study effort was part of a US Department of Energy (DOE) Office of Energy Storage and Distribution (OESD) research program, Superconductivity Technology for Electric Power Systems (STEPS). The study took a systems perspective to gain insights to help guide DOE in managing research designed to realize the vision of HTS applications. Specific objectives of the study were as follows: to develop an understanding of the fundamental HTS transformer design issues that can provide guidance for developing practical devices of interest to the electric utility industry; to identify electric utility requirements for HTS transformers and to evaluate the potential for developing a commercial market; to evaluate the market potential and national benefits for HTS transformers that could be achieved by a successful HTS development program; to develop an integrated systems analysis framework, which can be used to support R&D planning by DOE, by identifying how various HTS materials characteristics impact the performance, cost, and national benefits of the HTS application.

  6. A Novel Cost Based Model for Energy Consumption in Cloud Computing

    Science.gov (United States)

    Horri, A.; Dastghaibyfard, Gh.

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while still meeting QoS requirements. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model takes cache interference costs into account; these costs depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, the data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716

  7. A Novel Cost Based Model for Energy Consumption in Cloud Computing

    Directory of Open Access Journals (Sweden)

    A. Horri

    2015-01-01

    Full Text Available Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while still meeting QoS requirements. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model takes cache interference costs into account; these costs depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, the data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  8. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while still meeting QoS requirements. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model takes cache interference costs into account; these costs depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, the data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.
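
    A hedged toy version of such a cost model: energy is accumulated for a busy host, with an extra cache-interference penalty that grows with the data size, the number of co-located VMs, and the number of time quanta. All coefficients and the functional form below are illustrative assumptions, not the published model or the CloudSim API.

```python
# Toy energy model for a time-shared host (all coefficients are illustrative).
def host_energy_joules(num_vms, data_size_mb, quantum_ms, total_time_ms,
                       p_busy_watts=250.0, cache_penalty_j=0.002):
    """Rough energy estimate for one host running num_vms time-shared VMs."""
    seconds = total_time_ms / 1000.0
    quanta = total_time_ms / quantum_ms
    # Each quantum switches the CPU between VMs; assume the resulting cache
    # interference costs extra energy proportional to data size and co-located VMs.
    interference = cache_penalty_j * data_size_mb * (num_vms - 1) * quanta
    return p_busy_watts * seconds + interference

for vms in (1, 2, 4, 8):
    e = host_energy_joules(num_vms=vms, data_size_mb=512, quantum_ms=10,
                           total_time_ms=60_000)
    print(f"{vms} VM(s): ~{e:,.0f} J")
```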

  9. Phase Transition in Computing Cost of Overconstrained NP-Complete 3-SAT Problems

    Science.gov (United States)

    Woodson, Adam; O'Donnell, Thomas; Maniloff, Peter

    2002-03-01

    Many intractable, NP-Complete problems such as Traveling Salesman (TSP) and 3-Satisfiability (3-SAT), which arise in hundreds of computer science, industrial, and commercial applications, are now known to exhibit phase transitions in computational cost. While these problems appear not to have any structure which would make them amenable to attack with quantum computing, their critical behavior may allow physical insights derived from statistical mechanics and critical theory to shed light on these computationally "hardest" of problems. While computational theory indicates that "the intractability of the NP-Complete class resides solely in the exponential growth of the possible solutions" with the number of variables, n, the present work instead investigates the complex patterns of "overlap" amongst 3-SAT clauses (their combined effects) when n-tuples of these act in succession to reduce the space of valid solutions. An exhaustive-search algorithm was used to eliminate 'bad' states from amongst the 'good' states residing within the space of all 2^n possible solutions of randomly generated 3-SAT problems. No backtracking or optimization heuristics were employed, nor was problem structure exploited (i.e., typical cases were generated), and the (k=3)-SAT propositional logic problems generated were in standard conjunctive normal form (CNF). Each problem had an effectively infinite number of clauses, m (i.e., with r = m/n >= 10), to ensure that every problem would be unsatisfiable (i.e., that each would fail), and duplicate clauses were not permitted. This process was repeated for each of several low values of n.
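
    A hedged miniature of the exhaustive-search idea: enumerate all 2^n assignments of a random 3-CNF formula (no duplicate clauses) and count how many assignments survive as clauses are applied in succession. The clause generator and the survivor count are illustrative, not the instrumentation used in the study.

```python
# Enumerate all 2^n assignments of a random 3-CNF formula and track how many
# assignments remain satisfying as clauses are applied one after another.
import itertools
import random

def random_3cnf(n, m, seed=0):
    rng = random.Random(seed)
    clauses = set()
    while len(clauses) < m:                        # no duplicate clauses allowed
        vars_ = rng.sample(range(n), 3)
        clause = tuple(sorted((v, rng.choice((True, False))) for v in vars_))
        clauses.add(clause)                        # (variable, sign) literals
    return list(clauses)

def surviving_assignments(n, clauses):
    survivors = list(itertools.product((False, True), repeat=n))
    counts = []
    for clause in clauses:
        survivors = [a for a in survivors
                     if any(a[v] == sign for v, sign in clause)]
        counts.append(len(survivors))              # survivors after each clause
    return counts

if __name__ == "__main__":
    n = 12
    counts = surviving_assignments(n, random_3cnf(n, m=10 * n))
    print(counts)                                  # typically decays to 0 for r = m/n >= 10
```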

  10. Mine Cost Can Hardly Be Lowered Due To High Percentage Of Inflexible Costs For Lead & Zinc Mine Enterprises

    Institute of Scientific and Technical Information of China (English)

    2016-01-01

    At the 2016 (11th) Shanghai Lead & Zinc Summit, Lian Chuanshuang of Tibet Huayu Mining described the current operating conditions of domestic mining enterprises. At the Summit, he pointed out that mine costs can currently hardly be reduced, the main reason being the high percentage of inflexible costs.

  11. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed using bibliometric approaches. This study aims to provide computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for the assessment of research activities. To this end, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. We used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the field of physics utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
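
    A hedged sketch of the basic construction: build a co-authorship graph from (paper, author-list) records with networkx and rank authors by paper count. The toy records are invented; nothing here reproduces the Scopus extraction or the authors' ranking procedure.

```python
# Build a toy co-authorship network and rank authors by paper count.
from collections import Counter
from itertools import combinations
import networkx as nx

papers = [                      # invented records: each entry is one paper's author list
    ["Kim", "Lee", "Park"],
    ["Kim", "Park"],
    ["Lee", "Choi"],
    ["Kim", "Choi", "Lee"],
]

paper_counts = Counter(a for authors in papers for a in authors)

G = nx.Graph()
for authors in papers:
    for a, b in combinations(authors, 2):          # every pair of co-authors
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1                 # edge weight = number of joint papers
        else:
            G.add_edge(a, b, weight=1)

print("author rank:", paper_counts.most_common())
print("co-authorship edges:",
      [(u, v, d["weight"]) for u, v, d in G.edges(data=True)])
```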

  12. Surgical site infections : how high are the costs?

    NARCIS (Netherlands)

    Broex, E. C. J.; van Asselt, A. D. I.; Bruggeman, C. A.; van Tiel, F. H.

    2009-01-01

    There is an increased interest in the prevention of nosocomial infections and in the potential savings in healthcare costs. The aim of this review of recent studies on surgical site infections (SSIs) was to compare methods of cost research and the magnitudes of costs due to SSI. The studies reviewed diffe

  13. Surgical site infections : how high are the costs?

    NARCIS (Netherlands)

    Broex, E. C. J.; van Asselt, A. D. I.; Bruggeman, C. A.; van Tiel, F. H.

    There is an increased interest in the prevention of nosocomial infections and in the potential savings in healthcare costs. The aim of this review of recent studies on surgical site infections (SSIs) was to compare methods of cost research and the magnitudes of costs due to SSI. The studies reviewed

  14. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    Science.gov (United States)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
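
    The role of the adaptive surrogate can be illustrated with a deliberately simplified sketch: a cheap polynomial fit, built from a handful of "expensive" evaluations, stands in for the costly model inside a random-walk Metropolis sampler. The target density, the surrogate form, and the one-dimensional parameter are illustrative assumptions, not the transitional MCMC framework of the paper.

```python
# Simplified illustration: replace an "expensive" log-likelihood with a cheap
# polynomial surrogate inside a random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(0)

def expensive_loglike(theta):
    # Stand-in for an MD-based likelihood (assume each call is costly).
    return -0.5 * ((theta - 1.5) / 0.3) ** 2

# Fit a quadratic surrogate from a handful of expensive evaluations.
design = np.linspace(0.0, 3.0, 7)
coeffs = np.polyfit(design, [expensive_loglike(t) for t in design], deg=2)
surrogate = np.poly1d(coeffs)

def metropolis(logpost, n_steps=5000, step=0.2, theta0=0.0):
    theta, samples = theta0, []
    lp = logpost(theta)
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

samples = metropolis(surrogate)
print("posterior mean ~", samples.mean(), "(true optimum 1.5)")
```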

  15. Bridging the Silos of Service Delivery for High-Need, High-Cost Individuals.

    Science.gov (United States)

    Sherry, Melissa; Wolff, Jennifer L; Ballreich, Jeromie; DuGoff, Eva; Davis, Karen; Anderson, Gerard

    2016-12-01

    Health care reform efforts that emphasize value have increased awareness of the importance of nonmedical factors in achieving better care, better health, and lower costs in the care of high-need, high-cost individuals. Programs that care for socioeconomically disadvantaged, high-need, high-cost individuals have achieved promising results in part by bridging traditional service delivery silos. This study examined 5 innovative community-oriented programs that are successfully coordinating medical and nonmedical services to identify factors that stimulate and sustain community-level collaboration and coordinated care across silos of health care, public health, and social services delivery. The authors constructed a conceptual framework depicting community health systems that highlights 4 foundational factors that facilitate community-oriented collaboration: flexible financing, shared leadership, shared data, and a strong shared vision of commitment toward delivery of person-centered care.

  16. High resolution, low cost solar cell contact development

    Science.gov (United States)

    Mardesich, N.

    1981-01-01

    The MIDFILM cell fabrication and encapsulation processes were demonstrated as a means of applying low-cost solar cell collector metallization. The average cell efficiency of 12.0 percent (AM1, 28 C) was achieved with fritted silver metallization with a demonstration run of 500 starting wafers. A 98 percent mechanical yield and 80 percent electrical yield were achieved through the MIDFILM process. High series resistance was responsible for over 90 percent of the electrical failures and was the major factor causing the low average cell efficiency. Environmental evaluations suggest that the MIDFILM cells do not degrade. A slight degradation in power was experienced in the MIDFILM minimodules when the AMP Solarlok connector delaminated during the environmental testing.

  17. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  18. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually carry significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, which is usually high in order to reduce the required diameter of the pipelines. In the past, these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to make use of this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first, because the compressors are the most power-consuming equipment. For this reason, the optimization of the compression system in terms of efficiency and cost is decisive for plant profit. The availability of the plant also has a strong influence on profit, especially in gas fields where the products have a relatively low added value compared to oil. Because of this, reliability becomes the third design variable of the compression system: the higher the reliability, the larger the plant production. The main way to improve the reliability of the compression system is the use of multiple compression trains in parallel, in a 2x50% or 3x50% configuration, with one in stand-by. Such configurations are possible and have advantages and disadvantages, but their main side effect is an increase in cost. This is common offshore practice, but it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement, together with a critical evaluation of the overall system, can in some cases provide a cheaper system with equal or better performance. This paper presents a case study of a procedure to evaluate a compression system design that improves reliability without an extreme cost increase, balancing the number of pieces of equipment, the series or parallel arrangement, and the driver selection. Two case studies will be

  19. Scilab software as an alternative low-cost computing in solving the linear equations problem

    Science.gov (United States)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used both in teaching and research. These packages include licensed (proprietary) and open source (non-proprietary) software. One of the reasons to use such a package is the complexity of mathematical functions (e.g., linear problems); in addition, the number of variables in linear and non-linear functions has increased. The aim of this paper was to reflect on key aspects related to the method, didactics, and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative, low-cost computing environment. In this paper, Scilab was used to propose activities related to the mathematical models. In the experiment, four numerical methods, namely Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition, were implemented. The results of this study show that routines for these numerical methods were created and explored using Scilab procedures, and that such routines can then be exploited as teaching material for a course.
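
    For illustration, here is a hedged NumPy/SciPy sketch of two of the methods named above (LU decomposition and the inverse-matrix route) applied to the same small system; it is written in Python rather than Scilab, so it only mirrors the kind of routine the paper builds and is not the authors' Scilab code.

```python
# Solve A x = b with two of the methods mentioned above (NumPy/SciPy stand-in for Scilab).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, -2.0, 1.0],
              [3.0,  6.0, -4.0],
              [2.0,  1.0,  8.0]])
b = np.array([12.0, -25.0, 32.0])

# 1) Lower-Upper (LU) Decomposition route
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# 2) Inverse-matrix route (fine for teaching, not recommended numerically)
x_inv = np.linalg.inv(A) @ b

print("LU solution:     ", x_lu)
print("Inverse solution:", x_inv)
print("Residual norm:   ", np.linalg.norm(A @ x_lu - b))
```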

  20. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  1. Use of several Cloud Computing approaches for climate modelling: performance, costs and opportunities

    Science.gov (United States)

    Perez Montes, Diego A.; Añel Cabanelas, Juan A.; Wallom, David C. H.; Arribas, Alberto; Uhe, Peter; Caderno, Pablo V.; Pena, Tomas F.

    2017-04-01

    Cloud Computing is a technological option that offers great possibilities for modelling in the geosciences. We have studied how two different climate models, HadAM3P-HadRM3P and CESM-WACCM, can be adapted in two different ways to run on Cloud Computing environments from three different vendors: Amazon, Google, and Microsoft. We have also evaluated qualitatively how the use of Cloud Computing can affect the allocation of resources by funding bodies, as well as issues related to computing security, including scientific reproducibility. Our first experiments were developed using the well known ClimatePrediction.net (CPDN), which uses BOINC, on the infrastructure of two cloud providers, namely Microsoft Azure and Amazon Web Services (hereafter AWS). For this comparison we ran a set of thirteen month climate simulations for CPDN in Azure and AWS using a range of different virtual machines (VMs) for HadRM3P (50 km resolution over the South America CORDEX region) nested in the global atmosphere-only model HadAM3P. These simulations were run on a single processor and took between 3 and 5 days to compute depending on the VM type. The last part of our simulation experiments consisted of running WACCM on different VMs on the Google Compute Engine (GCE) and comparing with the supercomputer (SC) Finisterrae1 of the Centro de Supercomputacion de Galicia. It was shown that GCE gives better performance than the SC for smaller numbers of cores/MPI tasks, but the model throughput clearly shows that the SC performs better beyond approximately 100 cores (related to network speed and latency differences). From a cost point of view, Cloud Computing moves researchers from a traditional approach, where experiments were limited by the available hardware resources, to one limited by monetary resources (how many resources can be afforded). As there is an increasing movement and recommendation for budgeting HPC projects on this technology (budgets can be calculated in a more realistic way) we could see a shift on

  2. High performance of low cost soft magnetic materials

    Indian Academy of Sciences (India)

    Josefina M Silveyra; Emília Illeková; Marco Coïsson; Federica Celegato; Franco Vinai; Paola Tiberto; Javier A Moya; Victoria J Cremaschi

    2011-12-01

    The consistent interest in supporting research and development of magnetic materials during the last century is revealed in their steadily increasing market. In this work, the soft magnetic nanocrystalline FINEMET alloy was prepared with commercial purity raw materials and compared for the first time with the generally studied high purity one. The exhaustive characterization covers several diverse techniques: X-ray diffraction, Mössbauer spectroscopy, differential scanning calorimetry, differential thermal analysis and magnetic properties. In addition, a brief economic analysis is presented. For the alloys annealed at 813 K, the grain size was 16 nm with 19.5% Si, the coercivity was 0.30 A m^-1, and the saturation was 1.2 T. These results prove that the structural, magnetic and thermal properties of this material are very close to those of the expensive high purity FINEMET alloy, while a cost reduction of almost 98% seems highly attractive for laboratories and industry. The analysis should be useful not only for the production of FINEMETs, but for other types of systems with similar constitutive elements as well, including soft and hard magnetic materials.

  3. Low cost, high temperature membranes for PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-08-15

    This report details the results of a project to develop novel, low-cost high temperature membranes specifically for automotive fuel cell use. The specific aim of the project was to determine whether a polyaromatic hydrocarbon membrane could be developed that would give a performance (0.68 V at 500 mA cm^-2) competitive with an established perfluorinated sulfonic acid (PSA) membrane in a fuel cell at 120 °C and relative humidity of less than 50%. The novel approach used in this project was to increase the concentration of sulphonic groups to a useful level without dissolution by controlling the molecular structure of the membrane through the design of the monomer repeat unit. The physicochemical properties of 70 polymers synthesised in order to determine the effects of controlled sequence distribution were identified using an array of analytical techniques. Appropriate membranes were selected for fuel cell testing and fabricated into membrane electrode assemblies. Most of the homopolymers tested were able to withstand low humidity environments without immediate catastrophic failure, and some showed promise from accelerated durability results. The properties of a simple starting polymer structure were found to be enhanced by doping with sulphonated copper phthalocyanine, resulting in high temperature capability from a potentially cheap, simple and scaleable process. The accelerated and long-term durability of such a doped polymer membrane showed that polyaromatics could easily outperform fluoropolymers under high temperature (120 °C) operating conditions.

  4. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan

    2009-01-01

    This paper reports the latest results of a comprehensive program of experimental and computational analysis of the strength and reliability of wooden parts of low cost wind turbines. The possibilities of predicting the strength and reliability of different types of wood are studied in a series of experiments and computational investigations. Low cost testing machines have been designed and employed for the systematic analysis of different sorts of Nepali wood to be used for wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood...

  5. Capital cost: high and low sulfur coal plants-1200 MWe. [High sulfur coal

    Energy Technology Data Exchange (ETDEWEB)

    1977-01-01

    This Commercial Electric Power Cost Study for 1200 MWe (Nominal) high and low sulfur coal plants consists of three volumes. The high sulfur coal plant is described in Volumes I and II, while Volume III describes the low sulfur coal plant. The design basis and cost estimate for the 1232 MWe high sulfur coal plant is presented in Volume I, and the drawings, equipment list and site description are contained in Volume II. The reference design includes a lime flue gas desulfurization system. A regenerative sulfur dioxide removal system using magnesium oxide is also presented as an alternate in Section 7 Volume II. The design basis, drawings and summary cost estimate for a 1243 MWe low sulfur coal plant are presented in Volume III. This information was developed by redesigning the high sulfur coal plant for burning low sulfur sub-bituminous coal. These coal plants utilize a mechanical draft (wet) cooling tower system for condenser heat removal. Costs of alternate cooling systems are provided in Report No. 7 in this series of studies of costs of commercial electrical power plants.

  6. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud lets users control hardware resources and their applications directly. The difficult part of cloud computing is deployment in a real environment. It is hard to know the exact cost and resource requirements until the service is bought, and likewise whether it will support an existing application from a traditional data center or whether a new application has to be designed for the cloud environment. Security, latency, and fault tolerance are some of the parameters that need careful attention before deployment; normally these are only known after deploying, but by using simulation the experiments can be performed before deploying to the real environment. Through simulation we can understand the real cloud computing environment and, after successful results, start deploying the application in the cloud environment. By using the simulator it...

  7. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  8. A GENETIC ALGORITHM FOR CONSTRUCTING BROADCAST TREES WITH COST AND DELAY CONSTRAINTS IN COMPUTER NETWORKS

    Directory of Open Access Journals (Sweden)

    Ahmed Y. Hamed

    2015-01-01

    Full Text Available We refer to the problem of constructing broadcast trees with cost and delay constraints in networks as a delay-constrained minimum spanning tree problem in directed networks. Hence it is necessary to determine a spanning tree of minimal cost that connects the source node to all nodes, subject to delay constraints on broadcast routing. In this paper, we propose a genetic algorithm that solves broadcast routing by finding a low-cost broadcast tree subject to delay constraints. The algorithm finds the broadcast routing tree of a given network in terms of its links: it uses the connection matrix of the network to find spanning trees and considers the weights of the links to obtain the minimum spanning tree. The proposed algorithm finds good solutions, with fast convergence and high reliability. The scalability and the performance of the algorithm with an increasing number of network nodes are also encouraging.
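
    A hedged sketch of an evolutionary search for a delay-constrained, source-rooted broadcast tree. The parent-vector encoding, mutation-only loop, penalty scheme, and the tiny example network are illustrative assumptions, not the genetic algorithm of the paper.

```python
# Evolutionary search for a delay-constrained broadcast tree rooted at a source node.
import random

random.seed(1)
INF = float("inf")

# cost[u][v] and delay[u][v] for a small directed network; None = no link.
cost = [[None, 4, 2, None, None],
        [None, None, 3, 5, None],
        [None, 1, None, 8, 10],
        [None, None, None, None, 2],
        [None, None, None, None, None]]
delay = [[None, 2, 1, None, None],
         [None, None, 2, 3, None],
         [None, 1, None, 4, 6],
         [None, None, None, None, 1],
         [None, None, None, None, None]]
N, SOURCE, MAX_DELAY = 5, 0, 9

def parents_of(v):                       # nodes u with a directed link u -> v
    return [u for u in range(N) if cost[u][v] is not None]

def fitness(chrom):
    """chrom[v] = chosen parent of node v (v != SOURCE). Lower is better."""
    total = 0.0
    for v in range(1, N):
        d, u, hops = 0.0, v, 0           # follow the parent chain back to the source
        while u != SOURCE:
            p = chrom[u]
            d += delay[p][u]
            u, hops = p, hops + 1
            if hops > N:                 # cycle: chromosome does not encode a tree
                return INF
        if d > MAX_DELAY:
            return INF                   # end-to-end delay constraint violated
        total += cost[chrom[v]][v]       # each node contributes its single incoming edge
    return total

def random_chrom():
    return [None] + [random.choice(parents_of(v)) for v in range(1, N)]

def mutate(chrom):
    child = chrom[:]
    v = random.randrange(1, N)
    child[v] = random.choice(parents_of(v))
    return child

pop = [random_chrom() for _ in range(30)]
for _ in range(200):                     # simple (mu + lambda) evolution loop
    pop += [mutate(random.choice(pop)) for _ in range(30)]
    pop.sort(key=fitness)
    pop = pop[:30]

best = pop[0]
print("best tree parents:", best, "cost:", fitness(best))
```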

  9. High performance/low cost accelerator control system

    Science.gov (United States)

    Magyary, S.; Glatz, J.; Lancaster, H.; Selph, F.; Fahmie, M.; Ritchie, A.; Timossi, C.; Hinkson, C.; Benjegerdes, R.

    1980-10-01

    Implementation of a high performance computer control system tailored to the requirements of the Super HILAC accelerator is described. This system uses a distributed structure with fiber optic data links; multiple CPUs operate in parallel at each node. A large number of the latest 16 bit microcomputer boards are used to get a significant processor bandwidth. Dynamically assigned and labeled knobs together with touch screens allow a flexible and efficient operator interface. An X-Y vector graphics system allows display and labeling of real time signals as well as general plotting functions. Both the accelerator parameters and the graphics system can be driven from BASIC interactive programs in addition to the precanned user routines.

  10. Low-Cost, Rugged High-Vacuum System

    Science.gov (United States)

    Sorensen, Paul; Kline-Schoder, Robert

    2012-01-01

    A need exists for miniaturized, rugged, low-cost high-vacuum systems. Recent advances in sensor technology have led to the development of very small mass spectrometer detectors as well as other analytical instruments such as scanning electron microscopes. However, the vacuum systems to support these sensors remain large, heavy, and power-hungry. To meet this need, a miniaturized vacuum system was developed based on a very small, rugged, and inexpensive-to-manufacture molecular drag pump (MDP). The MDP is enabled by a miniature, very-high-speed (200,000 rpm), rugged, low-power, brushless DC motor optimized for wide temperature operation and long life. The key advantages of the pump are reduced cost and improved ruggedness compared to other mechanical high-vacuum pumps. The machining of the rotor and stators is very simple compared to that necessary to fabricate rotor and stator blades for other pump designs. Also, the symmetry of the rotor is such that dynamic balancing of the rotor will likely not be necessary. Finally, the number of parts in the unit is cut by nearly a factor of three over competing designs. The new pump forms the heart of a complete vacuum system optimized to support analytical instruments in terrestrial applications and on spacecraft and planetary landers. The MDP achieves high vacuum coupled to a ruggedized diaphragm rough pump. Instead of the relatively complicated rotor and stator blades used in turbomolecular pumps, the rotor in the MDP consists of a simple, smooth cylinder of aluminum. This will turn at approximately 200,000 rpm inside an outer stator housing. The pump stator comprises a cylindrical aluminum housing with one or more specially designed grooves that serve as flow channels. To minimize the length of the pump, the gas is forced down the flow channels of the outer stator to the base of the pump. The gas is then turned and pulled toward the top through a second set of channels cut into an inner stator housing that surrounds the

  11. CRPC research into linear algebra software for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Pozo, R. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Sorensen, D.C. [Rice Univ., Houston, TX (United States). Dept. of Computational and Applied Mathematics

    1994-12-31

    In this paper the authors look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. The authors focus on the design of the distributed-memory version of LAPACK, and on an object-oriented interface to LAPACK.
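
    As a small, hedged illustration of how such dense linear algebra libraries are consumed today, the snippet below calls LAPACK's dgesv driver through SciPy's low-level wrappers and checks the result against NumPy's higher-level solve. It only exemplifies calling a LAPACK routine; it does not reproduce the CRPC distributed-memory work itself.

```python
# Call LAPACK's dgesv (dense LU factorise-and-solve) via SciPy's low-level wrapper.
import numpy as np
from scipy.linalg import lapack

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv, x, info = lapack.dgesv(A, b)   # factorise and solve in one call
assert info == 0                        # info != 0 would signal a singular matrix

print("dgesv solution:", x)
print("matches numpy.linalg.solve:", np.allclose(x, np.linalg.solve(A, b)))
```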

  12. Low cost and high performance screen laminate regenerator matrix

    Energy Technology Data Exchange (ETDEWEB)

    Bin-Nun, Uri; Manitakos, Dan [FLIR Systems, North Billerica, MA (United States)

    2004-08-01

    A laminate screen matrix regenerator with 47 elements has been designed, analyzed, fabricated and tested. The laminate was fabricated from stainless steel screen sheets that were stacked on top of each other at a certain angular orientation and then bonded in a high-temperature, high-pressure environment using a sintering process. This laminate is a porous structured medium with highly repeatable properties that can be controlled by varying mesh size, weave type, wire size and laminate sheet-to-sheet orientation. The flow direction in relation to the weave plane can be varied by cutting a cylindrical or rectangular laminate element along or across the weave. The regenerator flow resistance, thermal conductance losses, dead volume, surface area and heat transfer coefficient are analyzed. Regenerator cost and performance comparison data between the conventional, widely used method of stacked screens and the new stacked laminate matrix regenerator are discussed. Also, a square stainless steel screen laminate was manufactured in a way which permits gas to flow along the screen wire instead of across it. (Author)

  13. Low cost and high performance screen laminate regenerator matrix

    Science.gov (United States)

    Bin-Nun, Uri; Manitakos, Dan

    2004-06-01

    A laminate screen matrix regenerator with 47 elements has been designed, analyzed, fabricated and tested. The laminate was fabricated from stainless steel screen sheets that were stacked on top of each other at a certain angular orientation and then bonded in a high-temperature, high-pressure environment using a sintering process. This laminate is a porous structured medium with highly repeatable properties that can be controlled by varying mesh size, weave type, wire size and laminate sheet-to-sheet orientation. The flow direction in relation to the weave plane can be varied by cutting a cylindrical or rectangular laminate element along or across the weave. The regenerator flow resistance, thermal conductance losses, dead volume, surface area and heat transfer coefficient are analyzed. Regenerator cost and performance comparison data between the conventional, widely used method of stacked screens and the new stacked laminate matrix regenerator are discussed. Also, a square stainless steel screen laminate was manufactured in a way which permits gas to flow along the screen wire instead of across it.

  14. Unenhanced computed tomography in acute renal colic reduces cost outside radiology department

    DEFF Research Database (Denmark)

    Lauritsen, J.; Andersen, J.R.; Nordling, J.

    2008-01-01

    BACKGROUND: Unenhanced multidetector computed tomography (UMDCT) is well established as the procedure of choice for radiologic evaluation of patients with renal colic. The procedure has both clinical and financial consequences for the departments of surgery and radiology. However, the financial effect outside the radiology department is poorly elucidated. PURPOSE: To evaluate the financial consequences outside of the radiology department, a retrospective study comparing the ward occupation of patients examined with UMDCT to that of intravenous urography (IVU) was performed. MATERIAL AND METHODS: (...) saved the hospital USD 265,000 every 6 months compared to the use of IVU. CONCLUSION: Use of UMDCT compared to IVU in patients with renal colic leads to cost savings outside the radiology department. Publication date: 2008/12

  15. Multiple sequence alignment with arbitrary gap costs: computing an optimal solution using polyhedral combinatorics.

    Science.gov (United States)

    Althaus, Ernst; Caprara, Alberto; Lenhof, Hans-Peter; Reinert, Knut

    2002-01-01

    Multiple sequence alignment is one of the dominant problems in computational molecular biology. Numerous scoring functions and methods have been proposed, most of which result in NP-hard problems. In this paper we propose for the first time a general formulation for multiple alignment with arbitrary gap-costs based on an integer linear program (ILP). In addition we describe a branch-and-cut algorithm to effectively solve the ILP to optimality. We evaluate the performances of our approach in terms of running time and quality of the alignments using the BAliBase database of reference alignments. The results show that our implementation ranks amongst the best programs developed so far.
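
    The record above describes an exact ILP approach; as a much simpler illustration of what "arbitrary gap costs" means, the sketch below scores a pairwise (not multiple) alignment with the classic Waterman-Smith-Beyer dynamic program, where a gap of length k may be charged any function gap(k). This is a stand-in for the general concept only, not the authors' branch-and-cut method, and the match/mismatch scores and gap function are illustrative assumptions.

```python
# Pairwise alignment score under an arbitrary (non-affine) gap cost,
# following the Waterman-Smith-Beyer recurrence. Illustrative stand-in;
# NOT the paper's ILP/branch-and-cut method for multiple alignment.

def align_score(a, b, match=2, mismatch=-1, gap=lambda k: 3 + 0.5 * k):
    """Maximum global alignment score of sequences a and b,
    where a gap of length k costs gap(k) (an arbitrary function)."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    D = [[NEG] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        D[i][0] = -gap(i)          # leading gap in b
    for j in range(1, m + 1):
        D[0][j] = -gap(j)          # leading gap in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = D[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # consider a gap of arbitrary length k ending in either sequence
            for k in range(1, i + 1):
                best = max(best, D[i - k][j] - gap(k))
            for k in range(1, j + 1):
                best = max(best, D[i][j - k] - gap(k))
            D[i][j] = best
    return D[n][m]

if __name__ == "__main__":
    print(align_score("GATTACA", "GCATGCA"))
```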

  16. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate access to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  17. High-definition three-dimensional television disparity map computation

    Science.gov (United States)

    Chammem, Afef; Mitrea, Mihai; Prêteux, Françoise

    2012-10-01

    By reconsidering some two-dimensional video inherited approaches and by adapting them to the stereoscopic video content and to the human visual system peculiarities, a new disparity map is designed. First, the inner relation between the left and the right views is modeled by some weights discriminating between the horizontal and vertical disparities. Second, the block matching operation is achieved by considering a visual related measure (normalized cross correlation) instead of the traditional pixel differences (mean squared error or sum of absolute differences). The advanced three-dimensional (3-D) video-new three step search (3DV-NTSS) disparity map (3-D Video-New Three Step Search) is benchmarked against two state-of-the-art algorithms, namely NTSS and full-search MPEG (FS-MPEG), by successively considering two corpora. The first corpus was organized during the 3DLive French national project and regroups 20 min of stereoscopic video sequences. The second one, with similar size, is provided by the MPEG community. The experimental results demonstrate the effectiveness of 3DV-NTSS in both reconstructed image quality (average gains between 3% and 7% in both PSNR and structural similarity, with a singular exception) and computational cost (search operation number reduced by average factors between 1.3 and 13). The 3DV-NTSS was finally validated by designing a watermarking method for high definition 3-D TV content protection.
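
    The abstract's key ingredient is block matching driven by normalized cross correlation (NCC) rather than pixel-difference measures. The sketch below is an illustrative brute-force horizontal NCC search on synthetic data, not the 3DV-NTSS three-step search itself; the block size, disparity range, and test images are assumptions.

```python
import numpy as np

def ncc(block_a, block_b):
    """Zero-mean normalized cross correlation between two equal-sized blocks."""
    a = block_a - block_a.mean()
    b = block_b - block_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def disparity_for_block(left, right, y, x, size=8, max_disp=32):
    """Best horizontal disparity for the block at (y, x) of the left view,
    chosen by maximizing NCC against the right view (exhaustive search;
    the paper's 3DV-NTSS uses a faster three-step search instead)."""
    ref = left[y:y + size, x:x + size]
    best_d, best_score = 0, -1.0
    for d in range(0, min(max_disp, x) + 1):
        cand = right[y:y + size, x - d:x - d + size]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    right = rng.random((64, 96))
    left = np.roll(right, shift=5, axis=1)   # synthetic 5-pixel disparity
    print(disparity_for_block(left, right, y=20, x=40))
```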

  18. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    ... pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed as such from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS routines, which assume all data to be in memory. This is why the out-of-core results and the OpenMP thread results were presented separately and no attempt was made to combine them. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.

  19. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    Virtually all high-level programming is now done in Python, with numerically intensive operations performed by embedded C++ libraries. While Python is not currently used directly for numerically intensive work, a higher-performance Python solution (i.e., a Python compiler or a better JIT) would be quite desirable.

  20. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditi...

  1. User manual for GEOCOST: a computer model for geothermal cost analysis. Volume 2. Binary cycle version

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Walter, R.A.; Bloomster, C.H.

    1976-03-01

    A computer model called GEOCOST has been developed to simulate the production of electricity from geothermal resources and calculate the potential costs of geothermal power. GEOCOST combines resource characteristics, power recovery technology, tax rates, and financial factors into one systematic model and provides the flexibility to individually or collectively evaluate their impacts on the cost of geothermal power. Both the geothermal reservoir and power plant are simulated to model the complete energy production system. In the version of GEOCOST in this report, geothermal fluid is supplied from wells distributed throughout a hydrothermal reservoir through insulated pipelines to a binary power plant. The power plant is simulated using a binary fluid cycle in which the geothermal fluid is passed through a series of heat exchangers. The thermodynamic state points in basic subcritical and supercritical Rankine cycles are calculated for a variety of working fluids. Working fluids which are now in the model include isobutane, n-butane, R-11, R-12, R-22, R-113, R-114, and ammonia. Thermodynamic properties of the working fluids at the state points are calculated using empirical equations of state. The Starling equation of state is used for hydrocarbons and the Martin-Hou equation of state is used for fluorocarbons and ammonia. Physical properties of working fluids at the state points are calculated.
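
    GEOCOST itself couples reservoir, power-plant, tax, and financing models; as a heavily reduced illustration of the financial side only, the sketch below computes a generic levelized cost of electricity from a capital recovery factor. All plant parameters and costs are hypothetical and are not GEOCOST inputs or outputs.

```python
def capital_recovery_factor(rate, years):
    """Annualize a capital cost at a fixed discount rate over a plant lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost(capital, annual_om, annual_mwh, rate=0.08, years=30):
    """Levelized cost of electricity in $/MWh (capital plus O&M only)."""
    return (capital * capital_recovery_factor(rate, years) + annual_om) / annual_mwh

if __name__ == "__main__":
    # Hypothetical 50 MW binary plant: $200M capital, $8M/yr O&M, 90% capacity factor.
    mwh = 50 * 8760 * 0.9
    print(round(levelized_cost(200e6, 8e6, mwh), 2), "$/MWh")
```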

  2. The Principles and Practice of Distributed High Throughput Computing

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  3. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    [Report excerpts] ... High-throughput Molecular Datasets for Scalable Clustering using MapReduce, Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL (invited talk) ... and middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm (NSF MRI #0922657, $451,051) ...

  4. Comparing computer experiments for fitting high-order polynomial metamodels

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.; Jones, Bradley; Parker, Peter T.

    2010-01-01

    The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for a computer experiment result has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. We evaluate several computer experiments assuming the modeler is interested in fitting a high-order polynomial to th...

  5. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  6. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  7. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    Science.gov (United States)

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a state strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K-compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce the HPC costs. However, the communication cost of parallel computing is a bottleneck that acts as the main limit on further virtual screening improvements for drug innovation.

  8. Minimizing total costs of forest roads with computer-aided design model

    Indian Academy of Sciences (India)

    Abdullah E Akay

    2006-10-01

    Advances in personal computers (PCs) have increased interest in computer-based road-design systems to provide rapid evaluation of alternative alignments. Optimization techniques can provide road managers with a powerful tool that searches for large numbers of alternative alignments in short spans of time. A forest road optimization model, integrated with two optimization techniques, was developed to help a forest road engineer in evaluating alternative alignments in a faster and more systematic manner. The model aims at designing a path with minimum total road costs, while conforming to design specifications, environmental requirements, and driver safety. To monitor the sediment production of the alternative alignments, the average sediment delivered to a stream from a road section was estimated by using a road erosion/delivery model. The results indicated that this model has the potential to initiate a new procedure that will improve the forest road-design process by employing the advanced hardware and software capabilities of PCs and modern optimization techniques.
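
    As a toy illustration of rapidly searching many alternative alignments, the sketch below treats candidate road segments as a grid of per-cell construction costs and finds the cheapest path with Dijkstra's algorithm. It is a stand-in for the idea only; the article's model additionally enforces design specifications, environmental requirements, driver safety, and sediment delivery, and its costs are not the made-up numbers used here.

```python
import heapq

def cheapest_alignment(cost_grid, start, goal):
    """Dijkstra over a grid of per-cell construction costs: a toy stand-in for
    searching alternative road alignments between two fixed endpoints."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    dist = {start: cost_grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost_grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

if __name__ == "__main__":
    # Hypothetical per-cell costs; high values stand in for steep or sensitive terrain.
    grid = [[1, 4, 4, 1],
            [1, 9, 9, 1],
            [1, 1, 1, 1]]
    print(cheapest_alignment(grid, (0, 0), (0, 3)))
```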

  9. Graph Contraction for Mapping Data on Parallel Computers: A Quality–Cost Tradeoff

    Directory of Open Access Journals (Sweden)

    R. Ponnusamy

    1994-01-01

    Mapping data to parallel computers aims at minimizing the execution time of the associated application. However, it can take an unacceptable amount of time in comparison with the execution time of the application if the size of the problem is large. In this article, we first motivate the case for graph contraction as a means of reducing the problem size. We restrict our discussion to applications where the problem domain can be described using a graph (e.g., computational fluid dynamics applications). We then present a mapping-oriented parallel graph contraction (PGC) heuristic algorithm that yields a smaller representation of the problem, to which mapping is then applied. The mapping solution for the original problem is obtained by a straightforward interpolation. We then present experimental results on using contracted graphs as inputs to two physical optimization methods, namely genetic algorithms and simulated annealing. The experimental results show that the PGC algorithm still leads to reasonably good-quality mapping solutions for the original problem, while producing a substantial reduction in mapping time. Finally, we discuss the cost-quality tradeoffs in performing graph contraction.
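
    A minimal sketch of the contraction idea follows: one pass of greedy edge matching that merges matched vertex pairs into coarse vertices, shrinking the graph that a mapping method must then handle. This illustrates graph contraction in general, not the mapping-oriented PGC heuristic of the article.

```python
def contract_once(adj):
    """One pass of greedy edge matching: merge each matched pair of vertices
    into a single coarse vertex. `adj` maps vertex -> set of neighbors.
    Returns (coarse adjacency, mapping fine vertex -> coarse vertex)."""
    matched, mapping = set(), {}
    coarse_id = 0
    for v in adj:
        if v in matched:
            continue
        # pick any unmatched neighbor as the contraction partner, if one exists
        partner = next((u for u in adj[v] if u not in matched and u != v), None)
        group = [v] if partner is None else [v, partner]
        for u in group:
            matched.add(u)
            mapping[u] = coarse_id
        coarse_id += 1
    coarse = {i: set() for i in range(coarse_id)}
    for v, nbrs in adj.items():
        for u in nbrs:
            a, b = mapping[v], mapping[u]
            if a != b:
                coarse[a].add(b)
                coarse[b].add(a)
    return coarse, mapping

if __name__ == "__main__":
    ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    coarse, mapping = contract_once(ring)
    print(len(ring), "->", len(coarse), "vertices;", mapping)
```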

  10. How to avoid the high costs of physician turnover.

    Science.gov (United States)

    Berger, J E; Boyle, R L

    1992-01-01

    Physician recruitment is a complex, time consuming and competitive activity that is costly in terms of incurred expenses, administrative and physician time and lost revenue. Judith Berger and Robert Boyle, FACMGA, describe how to develop a well-designed retention and recruitment plan to avoid such costs.

  11. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
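
    The control flow described above (send an RTS, stream fixed-size memory-FIFO chunks until the acknowledgement arrives, then finish with a single direct put) can be sketched as a plain simulation. The chunk size and the point at which the acknowledgement "arrives" are invented for illustration; real DMA engines and message sizes are hardware-specific.

```python
def transfer(data, chunk_size, ack_after_chunks):
    """Simulate the origin DMA engine's decision logic: memory-FIFO chunks
    until the RTS acknowledgement is seen, then one direct put for the rest.
    `ack_after_chunks` stands in for when the target's ACK would arrive."""
    log = ["RTS sent to target DMA engine"]
    sent = 0
    chunks_sent = 0
    ack_received = False
    while sent < len(data) and not ack_received:
        portion = data[sent:sent + chunk_size]
        sent += len(portion)
        chunks_sent += 1
        log.append(f"memory FIFO: sent {len(portion)} bytes")
        ack_received = chunks_sent >= ack_after_chunks  # ACK "arrives" here
    if sent < len(data):
        log.append(f"ACK received: direct put of remaining {len(data) - sent} bytes")
    return log

if __name__ == "__main__":
    for line in transfer(bytes(10_000), chunk_size=1024, ack_after_chunks=3):
        print(line)
```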

  12. Comparison of high-speed rail and maglev system costs

    Energy Technology Data Exchange (ETDEWEB)

    Rote, D.M.

    1998-07-01

    This paper compares the two modes of transportation, and notes important similarities and differences in the technologies and in how they can be implemented to their best advantage. Problems with making fair comparisons of the costs and benefits are discussed and cost breakdowns based on data reported in the literature are presented and discussed in detail. Cost data from proposed and actual construction projects around the world are summarized and discussed. Results from the National Maglev Initiative and the recently-published Commercial Feasibility Study are included in the discussion. Finally, estimates will be given of the expected cost differences between HSR and maglev systems implemented under simple and complex terrain conditions. The extent to which the added benefits of maglev technology offset the added costs is examined.

  13. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability for connecting scientific practice to the high school science classroom.
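
    The classroom exercise used VPython; the dependency-free sketch below shows the same kind of computational model of a baseball in motion using simple Euler time-stepping under gravity (air drag omitted). The initial velocities and time step are arbitrary choices, not values from the study.

```python
def simulate_baseball(v0x=30.0, v0y=20.0, dt=0.01, g=9.8):
    """Euler-step a baseball launched with velocity (v0x, v0y) m/s until it
    returns to the ground; returns the trajectory as (t, x, y) samples."""
    t, x, y, vy = 0.0, 0.0, 0.0, v0y
    path = [(t, x, y)]
    while y >= 0.0:
        x += v0x * dt
        y += vy * dt
        vy -= g * dt            # only gravity acts; air drag is ignored
        t += dt
        path.append((t, x, y))
    return path

if __name__ == "__main__":
    traj = simulate_baseball()
    t_land, x_land, _ = traj[-1]
    print(f"lands after ~{t_land:.2f} s, ~{x_land:.1f} m downrange")
```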

  14. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. Measurement of luminescence decays: High performance at low cost

    Science.gov (United States)

    Sulkes, Mark; Sulkes, Zoe

    2011-11-01

    The availability of inexpensive ultra-bright LEDs spanning the visible and near-ultraviolet, combined with the availability of inexpensive electronics equipment, makes it possible to construct a high performance luminescence lifetime apparatus (~5 ns instrumental response or better) at low cost. A central need for time domain measurement systems is the ability to obtain short (~1 ns or less) excitation light pulses from the LEDs. It is possible to build the necessary LED driver using a simple avalanche transistor circuit. We describe first a circuit to test for small signal NPN transistors that can avalanche. We then describe a final optimized avalanche mode circuit that we developed on a prototyping board by measuring driven light pulse duration as a function of the circuit on the board and passive component values. We demonstrate that the combination of the LED pulser and a 1P28 photomultiplier tube used in decay waveform acquisition has a time response that allows for detection and lifetime determination of luminescence decays down to ~5 ns. The time response and data quality afforded with the same components in time-correlated single photon counting are even better. For time-correlated single photon counting, an even simpler NAND-gate based LED driver circuit is also applicable. We also demonstrate the possible utility of a simple frequency domain method for luminescence lifetime determinations.
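
    Once a decay waveform has been digitized, the luminescence lifetime can be estimated from its single-exponential region. The sketch below fits a lifetime to synthetic photon-counting data with a log-linear least-squares fit; the 5 ns lifetime, time window, and noise model are assumptions used only to mimic the regime the article discusses, not measurements from its apparatus.

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate a single-exponential lifetime tau from decay data by a
    straight-line fit to log(counts) vs t (valid once the excitation pulse
    and instrument response have died away)."""
    mask = counts > 0
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 50e-9, 200)                 # 0-50 ns window
    true_tau = 5e-9                                 # 5 ns, near the stated limit
    counts = 1000.0 * np.exp(-t / true_tau)
    counts = rng.poisson(counts).astype(float)      # photon-counting noise
    print(f"fitted lifetime ~ {fit_lifetime(t, counts) * 1e9:.2f} ns")
```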

  16. Domain Decomposition Based High Performance Parallel Computing

    CERN Document Server

    Raju, Mandhapati P

    2009-01-01

    The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers. Parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of sparse direct solvers is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both Newton and modified Newton algorithms is tested.

  17. Care Coordination Challenges Among High-Needs, High-Costs Older Adults in a Medigap Plan

    Science.gov (United States)

    Wells, Timothy S.; Bhattarai, Gandhi R.; Hawkins, Kevin; Cheng, Yan; Ruiz, Joann; Barnowski, Cynthia A.; Spivack, Barney; Yeh, Charlotte S.

    2016-01-01

    Purpose of the Study: Many adults 65 years or older have high health care needs and costs. Here, we describe their care coordination challenges. Primary Practice Setting: Individuals with an AARP Medicare Supplement Insurance plan insured by UnitedHealthcare Insurance Company (for New York residents, UnitedHealthcare Insurance Company of New York). Methodology and Sample: The three groups included the highest needs, highest costs (the “highest group”), the high needs, high costs (the “high group”), and the “all other group.” Eligibility was determined by applying an internally developed algorithm based upon a number of criteria, including hierarchical condition category score, the Optum ImpactPro prospective risk score, as well as diagnoses of coronary artery disease, congestive heart failure, or diabetes. Results: The highest group comprised 2%, although consumed 12% of health care expenditures. The high group comprised 20% and consumed 46% of expenditures, whereas the all other group comprised 78% and consumed 42% of expenditures. On average, the highest group had $102,798 in yearly health care expenditures, compared with $34,610 and $7,634 for the high and all other groups, respectively. Fifty-seven percent of the highest group saw 16 or more different providers annually, compared with 21% and 2% of the high and all other groups, respectively. Finally, 28% of the highest group had prescriptions from at least seven different providers, compared with 20% and 5% of the high and all other groups, respectively. Implications for Case Management Practice: Individuals with high health care needs and costs have visits to numerous health care providers and receive multiple prescriptions for pharmacotherapy. As a result, these individuals can become overwhelmed trying to manage and coordinate their health care needs. Care coordination programs may help these individuals coordinate their care. PMID:27301064

  18. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  19. Manufacturing High-Quality Carbon Nanotubes at Lower Cost

    Science.gov (United States)

    Benavides, Jeanette M.; Lidecker, Henning

    2004-01-01

    A modified electric-arc welding process has been developed for manufacturing high-quality batches of carbon nanotubes at relatively low cost. Unlike in some other processes for making carbon nanotubes, metal catalysts are not used and, consequently, it is not necessary to perform extensive cleaning and purification. Also, unlike some other processes, this process is carried out at atmospheric pressure under a hood instead of in a closed, pressurized chamber; as a result, the present process can be implemented more easily. Although the present welding-based process includes an electric arc, it differs from a prior electric-arc nanotube-production process. The welding equipment used in this process includes an AC/DC welding power source with an integral helium-gas delivery system and circulating water for cooling an assembly that holds one of the welding electrodes (in this case, the anode). The cathode is a hollow carbon (optionally, graphite) rod having an outside diameter of 2 in. (≈5.1 cm) and an inside diameter of 5/8 in. (≈1.6 cm). The cathode is partly immersed in a water bath, such that it protrudes about 2 in. (≈5.1 cm) above the surface of the water. The bottom end of the cathode is held underwater by a clamp, to which is connected the grounding cable of the welding power source. The anode is a carbon rod 1/8 in. (≈0.3 cm) in diameter. The assembly that holds the anode includes a thumbknob-driven mechanism for controlling the height of the anode. A small hood is placed over the anode to direct a flow of helium downward from the anode to the cathode during the welding process. A bell-shaped exhaust hood collects the helium and other gases from the process. During the process, as the anode is consumed, the height of the anode is adjusted to maintain an anode-to-cathode gap of 1 mm. The arc-welding process is continued until the upper end of the anode has been lowered to a specified height

  20. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection online on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms that are optimized for parallel computation is a challenge for the groups of experts dealing with HPC. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  1. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
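
    One simple way to realize the resource-estimation idea is to fit runtime (or memory) against input size from historical job records and pad the prediction before submitting to the scheduler. The sketch below does exactly that with an ordinary least-squares line; the history, the linear model, and the 25% safety margin are illustrative assumptions, not the estimation system described in the paper.

```python
import numpy as np

def fit_resource_model(sizes, runtimes):
    """Least-squares fit runtime ~ a*size + b from historical job records."""
    A = np.vstack([sizes, np.ones_like(sizes)]).T
    (a, b), *_ = np.linalg.lstsq(A, runtimes, rcond=None)
    return a, b

def request_walltime(a, b, size, safety=1.25):
    """Walltime to request for a new job, padded so the limit is rarely exceeded."""
    return safety * (a * size + b)

if __name__ == "__main__":
    # Synthetic history: runtime grows roughly linearly with image volume (GB).
    sizes = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    runtimes = np.array([12.0, 20.0, 41.0, 78.0, 160.0])   # minutes
    a, b = fit_resource_model(sizes, runtimes)
    print(f"request ~{request_walltime(a, b, 6.0):.0f} min for a 6 GB input")
```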

  2. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers, with the case of the Academic Computing Center ACK Cyfronet AGH, Kraków, Poland.

  3. Ultra High Brightness/Low Cost Fiber Coupled Packaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The focus of the proposed effort is maximizing the brightness of fiber coupled laser diode pump sources at a minimum cost. The specific innovation proposed is to...

  4. A Low-Cost, High-Precision Navigator Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Toyon Research Corporation proposes to develop and demonstrate a prototype low-cost precision navigation system using commercial-grade gyroscopes and accelerometers....

  5. A Low Cost High Specific Stiffness Mirror Substrate Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The primary purpose of this proposal is to develop and demonstrate a new technology for manufacturing an ultra-low-cost precision optical telescope mirror which can...

  6. High performance computing: Clusters, constellations, MPPs, and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors' use of "cluster" to refer to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  7. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. Developing a High Performance Software Library with MPI and CUDA for Matrix Computations

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-04-01

    Nowadays, the paradigm of parallel computing is changing. CUDA is now a popular programming model for general purpose computations on GPUs, and a great number of applications have been ported to CUDA, obtaining speedups of orders of magnitude compared to optimized CPU implementations. Hybrid approaches that combine the message passing model with the shared memory model for parallel computing are a solution for very large applications. We considered a heterogeneous cluster that combines CPU and GPU computations using MPI and CUDA for developing a high performance linear algebra library. Our library deals with large linear system solvers because they are a common problem in the fields of science and engineering. Direct methods for computing the solution of such systems can be very expensive due to high memory requirements and computational cost. An efficient alternative is iterative methods, which compute only an approximation of the solution. In this paper we present an implementation of a library that uses a hybrid model of computation, using MPI and CUDA, and implements both direct and iterative linear system solvers. Our library implements LU and Cholesky factorization based solvers and some of the non-stationary iterative methods using the MPI/CUDA combination. We compared the performance of our MPI/CUDA implementation with classic programs written to be run on a single CPU.
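
    The trade-off the abstract mentions (expensive direct factorizations versus cheaper, approximate iterative methods) can be seen even in a small CPU-only example. The sketch below compares NumPy's direct solver with a hand-written Jacobi iteration on a diagonally dominant system; it is not the MPI/CUDA library itself, and the test matrix is synthetic.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=500):
    """Jacobi iteration: cheap per step and easy to parallelize, but it only
    approximates the solution (converges for diagonally dominant A)."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    A = rng.random((n, n))
    A += n * np.eye(n)                    # make it strictly diagonally dominant
    b = rng.random(n)
    x_direct = np.linalg.solve(A, b)      # direct (LU-based) solve
    x_iter = jacobi(A, b)
    print("max difference:", np.abs(x_direct - x_iter).max())
```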

  9. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    [Report front matter and excerpts] ... that make them easily parallelizable in the manner that, for example, atmospheric or ocean general circulation models (GCMs) are parallel. Many GCMs ... Enclosed is the Final Report for ONR DURIP Grant No. N00014-15-1-2840, entitled "High Performance Computing Assets for Ocean Acoustics Research," Principal Investigator Timothy F. Duda, Applied Ocean ... Distribution is unlimited.

  10. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management plays an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  11. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  12. The Application of Cloud Computing to Astronomy: A Study of Cost and Performance

    CERN Document Server

    Berriman, G Bruce; Juve, Gideon; Regelson, Moira; Plavchan, Peter

    2010-01-01

    Cloud computing is a powerful new technology that is widely used in the business world. Recently, we have been investigating the benefits it offers to scientific computing. We have used three workflow applications to compare the performance of processing data on the Amazon EC2 cloud with the performance on the Abe high-performance cluster at the National Center for Supercomputing Applications (NCSA). We show that the Amazon EC2 cloud offers better performance and value for processor- and memory-limited applications than for I/O-bound applications. We provide an example of how the cloud is well suited to the generation of a science product: an atlas of periodograms for the 210,000 light curves released by the NASA Kepler Mission. This atlas will support the identification of periodic signals, including those due to transiting exoplanets, in the Kepler data sets.

  13. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wattson, Daniel A., E-mail: dwattson@partners.org [Harvard Radiation Oncology Program, Boston, Massachusetts (United States); Hunink, M.G. Myriam [Departments of Radiology and Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands and Center for Health Decision Science, Harvard School of Public Health, Boston, Massachusetts (United States); DiPiro, Pamela J. [Department of Imaging, Dana-Farber Cancer Institute, Boston, Massachusetts (United States); Das, Prajnan [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Hodgson, David C. [Department of Radiation Oncology, University of Toronto, Toronto, Ontario (Canada); Mauch, Peter M.; Ng, Andrea K. [Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Boston, Massachusetts (United States)

    2014-10-01

    Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening
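
    The study's actual inputs (lung cancer rates, stage shift, utilities, costs) are not reproduced here, but the mechanics of a Markov cohort cost-effectiveness comparison can be sketched generically: evolve a state distribution cycle by cycle, accumulate discounted costs and QALYs for each strategy, and report the incremental cost per QALY against a willingness-to-pay threshold. Every transition probability, cost, and utility below is made up for illustration.

```python
import numpy as np

def run_cohort(P, costs, utilities, cycles=40, discount=0.03):
    """Markov cohort model: P is the per-cycle transition matrix over health
    states; costs/utilities are per-state per-cycle values. Returns total
    discounted cost and QALYs per person."""
    dist = np.zeros(P.shape[0])
    dist[0] = 1.0                             # everyone starts in 'well'
    total_cost = total_qaly = 0.0
    for k in range(cycles):
        disc = 1.0 / (1.0 + discount) ** k
        total_cost += disc * dist @ costs
        total_qaly += disc * dist @ utilities
        dist = dist @ P
    return total_cost, total_qaly

if __name__ == "__main__":
    # States: well, lung cancer, dead (all numbers hypothetical).
    P_no_screen = np.array([[0.985, 0.005, 0.010],
                            [0.0,   0.60,  0.40 ],
                            [0.0,   0.0,   1.0  ]])
    P_screen = P_no_screen.copy()             # screening detects earlier ...
    P_screen[1] = [0.0, 0.75, 0.25]           # ... so cancer mortality is lower
    c_no = np.array([0.0, 60000.0, 0.0])
    c_scr = np.array([300.0, 55000.0, 0.0])   # add an annual LDCT cost while well
    u = np.array([0.85, 0.60, 0.0])
    cost0, qaly0 = run_cohort(P_no_screen, c_no, u)
    cost1, qaly1 = run_cohort(P_screen, c_scr, u)
    icer = (cost1 - cost0) / (qaly1 - qaly0)
    print(f"ICER ~ ${icer:,.0f} per QALY (compare with a $50,000 WTP threshold)")
```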

  14. Computer Vision Tools for Low-Cost and Noninvasive Measurement of Autism-Related Behaviors in Infants

    Directory of Open Access Journals (Sweden)

    Jordan Hashemi

    2014-01-01

    The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.

  15. Low-cost computing and network communication for a point-of-care device to perform a 3-part leukocyte differential

    Science.gov (United States)

    Powless, Amy J.; Feekin, Lauren E.; Hutcheson, Joshua A.; Alapat, Daisy V.; Muldoon, Timothy J.

    2016-03-01

    Point-of-care approaches for 3-part leukocyte differentials (granulocyte, monocyte, and lymphocyte), traditionally performed using a hematology analyzer within a panel of tests called a complete blood count (CBC), are essential not only to reduce cost but to provide faster results in low-resource areas. Recent developments in lab-on-a-chip devices have shown promise in reducing the size and reagents used, translating to a decrease in overall cost. Furthermore, smartphone diagnostic approaches have shown much promise in the area of point-of-care diagnostics, but the relatively high per-unit cost may limit their utility in some settings. We present here a method to reduce the computing cost of a simple epi-fluorescence imaging system by using a Raspberry Pi (a single-board computer) to obtain a leukocyte count and differential from a low volume of blood obtained via fingerstick. Additionally, the system utilizes a "cloud-based" approach to send image data from the Raspberry Pi to a main server and return results back to the user, exporting the bulk of the computational requirements. Six images were acquired per minute, with up to 200 cells per field of view. Preliminary results showed that the differential count varied significantly in monocytes with a 1-minute time difference, indicating the importance of time-gating to produce an accurate/consistent differential.

  16. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    2011-02-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of...
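
    Production HTC pipelines run on batch systems such as HTCondor across whole clusters; as a single-machine stand-in for the pipelining/throughput idea, the sketch below overlaps per-trait model fitting across worker processes. The trait list and the placeholder training function are assumptions.

```python
from multiprocessing import Pool
import time

def train_one_trait(trait):
    """Placeholder for fitting a genomic prediction model for one trait;
    a real HTC pipeline would launch this as an independent cluster job."""
    time.sleep(0.2)                       # stand-in for hours of computation
    return trait, f"model_for_{trait}"

if __name__ == "__main__":
    traits = ["milk_yield", "fertility", "longevity", "feed_efficiency"]
    start = time.perf_counter()
    with Pool(processes=4) as pool:       # run trait evaluations concurrently
        results = dict(pool.map(train_one_trait, traits))
    print(results)
    print(f"wall time: {time.perf_counter() - start:.2f} s (vs ~0.8 s serially)")
```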

  17. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  18. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  19. High flight costs, but low dive costs, in auks support the biomechanical hypothesis for flightlessness in penguins.

    Science.gov (United States)

    Elliott, Kyle H; Ricklefs, Robert E; Gaston, Anthony J; Hatch, Scott A; Speakman, John R; Davoren, Gail K

    2013-06-01

    Flight is a key adaptive trait. Despite its advantages, flight has been lost in several groups of birds, notably among seabirds, where flightlessness has evolved independently in at least five lineages. One hypothesis for the loss of flight among seabirds is that animals moving between different media face tradeoffs between maximizing function in one medium relative to the other. In particular, biomechanical models of energy costs during flying and diving suggest that a wing designed for optimal diving performance should lead to enormous energy costs when flying in air. Costs of flying and diving have been measured in free-living animals that use their wings to fly or to propel their dives, but not both. Animals that both fly and dive might approach the functional boundary between flight and nonflight. We show that flight costs for thick-billed murres (Uria lomvia), which are wing-propelled divers, and pelagic cormorants (Phalacrocorax pelagicus) (foot-propelled divers), are the highest recorded for vertebrates. Dive costs are high for cormorants and low for murres, but the latter are still higher than for flightless wing-propelled diving birds (penguins). For murres, flight costs were higher than predicted from biomechanical modeling, and the oxygen consumption rate during dives decreased with depth at a faster rate than estimated biomechanical costs. These results strongly support the hypothesis that function constrains form in diving birds, and that optimizing wing shape and form for wing-propelled diving leads to such high flight costs that flying ceases to be an option in larger wing-propelled diving seabirds, including penguins.

  20. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-end computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington, DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional, and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  1. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  2. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Full Text Available Problem statement: Software development effort estimation is the process of predicting the most realistic amount of effort required to develop software based on some parameters. It has been one of the biggest challenges in Computer Science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques, such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points, require a lengthy estimation process, which is no longer acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work proposes a new fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of fuzzy logic in improving effort estimation accuracy by characterizing input parameters using two-sided Gaussian functions, which give a smoother transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than the classical intervals in COCOMO II. Using the advantages of fuzzy logic, input parameters can be specified by a distribution of their possible values, and these fuzzy sets are represented by membership functions. To get a smoother transition in the membership functions for the input parameters, their associated linguistic values were represented by two-sided Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results obtained by applying COCOMO II and the proposed fuzzy logic model to the NASA dataset and an artificially created dataset, it was found that the proposed model was performing
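
    As a rough illustration of the membership-function idea this abstract describes (a sketch under assumed parameter values, not the authors' implementation), the snippet below evaluates a two-sided Gaussian membership function of the kind used to fuzzify COCOMO II cost-driver ratings:

```python
import numpy as np

def gauss2mf(x, sigma1, c1, sigma2, c2):
    """Two-sided Gaussian membership function: the left flank uses
    (sigma1, c1), the right flank uses (sigma2, c2), and membership is
    1.0 on the plateau between c1 and c2."""
    x = np.asarray(x, dtype=float)
    y = np.ones_like(x)
    left, right = x < c1, x > c2
    y[left] = np.exp(-0.5 * ((x[left] - c1) / sigma1) ** 2)
    y[right] = np.exp(-0.5 * ((x[right] - c2) / sigma2) ** 2)
    return y

# Fuzzify a hypothetical "complexity" rating on a 0..6 scale.
ratings = np.linspace(0, 6, 13)
print(gauss2mf(ratings, sigma1=0.5, c1=2.0, sigma2=0.8, c2=3.0))
```

    The plateau between the two centres keeps membership at 1 over the nominal range, while the different left and right widths give the smooth, asymmetric transitions the abstract refers to.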

  3. Life Cycle Cost Model for Very High Speed Integrated Circuits.

    Science.gov (United States)

    1984-09-01

    circuitry of the finished product level. ASD/ACCC has access to PRICE M on UNINET (30). 4. PRICE H (Programmed Review of Information for Costing and... AWAL/AAAS-2) has successfully used PRICE H to analyze hardware acquisition costs. ASD/ACCC has access to PRICE H on UNINET (13:1). 5. PRICE L... ASD/ACCC has access to this model on UNINET (17:21). VHSIC Program Description: As discussed earlier, LCC modeling includes all phases of a system's

  4. Technical Evaluation Report 52: Audio/Videoconferencing Packages: High cost

    Directory of Open Access Journals (Sweden)

    Urel Sawyers

    2005-11-01

    Full Text Available This report compares two integrated course delivery packages: Centra 6 and WebEx. Both applications feature asynchronous and synchronous audio communications for online education and training. They are relatively costly products, and provide useful comparisons with the two less expensive products to be evaluated in the following report #53. The criteria used in the current evaluation include capacity, interactivity features, integration with learning management systems, technical specifications, and cost. The report ends with a short analysis of the currently emerging audio-conferencing software, Google Talk.

  5. The high cost of free tuberculosis services: patient and household costs associated with tuberculosis care in Ebonyi State, Nigeria.

    Science.gov (United States)

    Ukwaja, Kingsley N; Alobu, Isaac; Lgwenyi, Chika; Hopewell, Philip C

    2013-01-01

    Poverty is both a cause and a consequence of tuberculosis. The objective of this study was to quantify patient/household costs for an episode of tuberculosis (TB), their relationship with household impoverishment, and the strategies used by TB patients to cope with these costs in a resource-limited, high TB/HIV setting. A cross-sectional study was conducted in three rural hospitals in southeast Nigeria. Consecutive adults with newly diagnosed pulmonary TB were interviewed to determine the costs each incurred along their care-seeking pathway, using a standardised questionnaire. We defined direct costs as out-of-pocket payments, and indirect costs as lost income. Of 452 patients enrolled, the majority were male (55%; 249) and rural residents (79%; 356), with a mean age of 34 (± 11.6) years. The median direct pre-diagnosis/diagnosis cost was $49 per patient, and the median direct treatment cost was $36 per patient. Indirect pre-diagnosis and treatment costs were $416, or 79% of total patient costs ($528). The median total cost of TB care per household was $592, corresponding to 37% of median annual household income before TB. Most patients reported having to borrow money (212; 47%), sell assets (42; 9%), or both (144; 32%) to cope with the cost of care. Following an episode of TB, household income fell, increasing the proportion of households classified as poor from 54% to 79%. Before TB illness, independent predictors of household poverty were rural residence (adjusted odds ratio [aOR] 2.8), HIV-positive status (aOR 4.8), and care-seeking at a private facility (aOR 5.1). After TB care, independent determinants of household poverty were younger age (≤ 35 years; aOR 2.4), male gender (aOR 2.1), and HIV-positive status (aOR 2.5). Patient and household costs for TB care are potentially catastrophic even where services are provided free of charge. There is an urgent need to implement strategies for TB care that are affordable for the poor.

  6. A review of High Performance Computing foundations for scientists

    CERN Document Server

    García-Risueño, Pablo; Ibáñez, Pablo E.

    2012-01-01

    The increase in available computational capabilities has made simulation emerge as a third discipline of Science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which would otherwise not be accessible, helps to improve experiments and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

  7. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  8. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  9. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain-partitioning approach. Effective use of the cache and hierarchical memories of modern computers, as well as the performance, speed-ups and efficiency achieved, are discussed. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
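
    As a loose illustration of the domain-partitioning idea mentioned above (a minimal sketch assuming mpi4py and NumPy, not the DEM source code), the following splits a 2-D grid row-wise across MPI ranks and exchanges ghost rows between neighbouring subdomains before a local update:

```python
# Minimal sketch (not the DEM code) of row-wise domain partitioning
# with halo exchange; assumes mpi4py and NumPy are installed.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NY, NX = 96, 96                                   # hypothetical global grid
rows = NY // size + (1 if rank < NY % size else 0)
local = np.full((rows + 2, NX), float(rank))      # +2 ghost rows, dummy data

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost rows with the neighbouring subdomains.
comm.Sendrecv(local[1, :], dest=up, recvbuf=local[-1, :], source=down)
comm.Sendrecv(local[-2, :], dest=down, recvbuf=local[0, :], source=up)

# One explicit, diffusion-like update on the interior of the subdomain.
local[1:-1, 1:-1] = 0.25 * (local[:-2, 1:-1] + local[2:, 1:-1] +
                            local[1:-1, :-2] + local[1:-1, 2:])
```

    Run with, for example, mpirun -n 4 python partition_sketch.py; each rank owns a contiguous band of rows and communicates only with its two neighbours, which is the property that makes the approach scale.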

  10. The Application of High Density Electronic Packaging for Spacecraft Cost and Mass Reduction

    Science.gov (United States)

    Lowry, Lynn E.; Prokop, Jon S.; Sandborn, Peter; Evans, Kristan

    1995-01-01

    It has become clear over the past few years that the packaging of spacecraft electronic systems must be improved. Not only have the weight and volume taken up by conventional packaging and interconnect systems become excessive, but active devices have advanced to the point where system performance is often limited by the packaging. Since electronic systems account for up to 30% of the size and weight budgets of a spacecraft, the utilization of high density electronic packaging will be a very important path to overall spacecraft miniaturization. In the late 1970s, high density interconnection technologies were introduced into mainframe computer applications. Subsequently, these technologies have been applied to avionics, telecommunication, biomedical and automotive systems. In each application the driving forces behind the adoption of these technologies were improved electrical performance, miniaturization, reduced power consumption, increased reliability and reduced manufacturing costs. The application of these technologies to planetary missions could provide significant benefits in reduced cost and design time if commercial technology and best commercial manufacturing practices are accepted. A mixed-signal telecommunication function has been used as an example to illustrate the potential mass, volume and power reduction achievable with the implementation of high density packaging technologies. The tradeoff analysis that was performed demonstrated that packaging technology selection is application specific, and that system level impact must be considered early in the design process. The results of this study, which compare size, performance, cost, risk and system level impact, are given. Finally, the technical and cultural obstacles that have inhibited the implementation of these technologies are discussed. Specifically, the issues of space-qualified hardware and technology availability are addressed. Space qualification is perceived by industry as being the

  11. The High Cost of Harsh Discipline and Its Disparate Impact

    Science.gov (United States)

    Rumberger, Russell W.; Losen, Daniel J.

    2016-01-01

    School suspension rates have been rising since the early 1970s, especially for children of color. One body of research has demonstrated that suspension from school is harmful to students, as it increases the risk of retention and school dropout. Another has demonstrated that school dropouts impose huge social costs on their states and localities,…

  12. Philosophy of design for low cost and high reliability

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Liebe, Carl Christian

    1996-01-01

    ...do extensive component testing and screening, which addressed the issues of reliability, thermo-mechanical properties, and radiation sensitivity of the commercial ICs. The facility helped to control costs by generating early information on component survival in space. The development philosophy ... and system flexibility are addressed. KEY WORDS: Micro satellite, stellar compass, star tracker, attitude determination.

  13. A Low-Cost Computer-Controlled Arduino-Based Educational Laboratory System for Teaching the Fundamentals of Photovoltaic Cells

    Science.gov (United States)

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-01-01

    We present a low-cost, fully computer-controlled, Arduino-based, educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools and methodology in order to learn fundamental…

  15. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar

    2010-10-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.
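
    The adaptive-filtering component mentioned in this abstract can be pictured with a short, generic sketch (standard complex LMS with NumPy; illustrative only, not the authors' architecture): the weights of an N-element array are adapted against a known training signal.

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """Complex LMS adaptation of array weights.
    X: (K, N) snapshots (K time samples, N antenna elements).
    d: (K,) known training signal. Returns the weight vector w."""
    K, N = X.shape
    w = np.zeros(N, dtype=complex)
    for k in range(K):
        y = np.vdot(w, X[k])               # array output, w^H x
        e = d[k] - y                       # error against the training signal
        w = w + mu * np.conj(e) * X[k]     # LMS weight update
    return w
```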

  16. A novel agent based autonomous and service composition framework for cost optimization of resource provisioning in cloud computing

    Directory of Open Access Journals (Sweden)

    Aarti Singh

    2017-01-01

    Full Text Available A cloud computing environment offers a simplified, centralized platform of resources for use when needed at a low cost. One of the key functionalities of this type of computing is to allocate resources on individual demand. However, with the expanding requirements of cloud users, the need for efficient resource allocation is also emerging. The main role of the service provider is to distribute and share resources effectively, which would otherwise result in resource wastage. In addition to the user getting the appropriate service for a request, the cost of the respective resources should also be optimized. In order to overcome these shortcomings and perform optimized resource allocation, this research proposes a new Agent-based Automated Service Composition (A2SC) algorithm comprising request processing and automated service composition phases, which is not only responsible for searching comprehensive services but also considers reducing the cost of virtual machines that are consumed by on-demand services only.
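
    As a minimal, hedged sketch of cost-aware provisioning (not the A2SC algorithm itself; the catalogue and prices below are invented for illustration), the following picks the cheapest virtual machine type that satisfies a request:

```python
from dataclasses import dataclass

@dataclass
class VmType:
    name: str
    cpus: int
    ram_gb: int
    price_per_hour: float          # hypothetical on-demand price

CATALOG = [
    VmType("small", 2, 4, 0.05),
    VmType("medium", 4, 8, 0.10),
    VmType("large", 8, 16, 0.20),
]

def cheapest_fit(cpus_needed, ram_needed_gb, hours):
    """Return the cheapest VM type meeting the demand and its total cost."""
    candidates = [v for v in CATALOG
                  if v.cpus >= cpus_needed and v.ram_gb >= ram_needed_gb]
    if not candidates:
        raise ValueError("no VM type satisfies the request")
    best = min(candidates, key=lambda v: v.price_per_hour)
    return best, best.price_per_hour * hours

vm, cost = cheapest_fit(3, 6, hours=12)
print(f"provision {vm.name} for 12 h at an estimated ${cost:.2f}")
```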

  17. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    Science.gov (United States)

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  18. Cost-Effectiveness Analysis in Practice: Interventions to Improve High School Completion

    Science.gov (United States)

    Hollands, Fiona; Bowden, A. Brooks; Belfield, Clive; Levin, Henry M.; Cheng, Henan; Shand, Robert; Pan, Yilin; Hanisch-Cerda, Barbara

    2014-01-01

    In this article, we perform cost-effectiveness analysis on interventions that improve the rate of high school completion. Using the What Works Clearinghouse to select effective interventions, we calculate cost-effectiveness ratios for five youth interventions. We document wide variation in cost-effectiveness ratios between programs and between…

  19. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  20. A Low-Cost Time-Hopping Impulse Radio System for High Data Rate Transmission

    Directory of Open Access Journals (Sweden)

    Jinyun Zhang

    2005-03-01

    Full Text Available We present an efficient, low-cost implementation of time-hopping impulse radio that fulfills the spectral mask mandated by the FCC and is suitable for high-data-rate, short-range communications. Key features are (i) an all-baseband implementation that obviates the need for passband components, (ii) symbol-rate (not chip-rate) sampling, A/D conversion, and digital signal processing, (iii) fast acquisition due to novel search algorithms, and (iv) spectral shaping that can be adapted to accommodate different spectrum regulations and interference environments. Computer simulations show that this system can provide 110 Mbps at 7–10 m distance, as well as higher data rates at shorter distances, under FCC emission limits. Due to the spreading concept of time-hopping impulse radio, the system can sustain multiple simultaneous users and can suppress narrowband interference effectively.
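
    The time-hopping idea itself can be sketched in a few lines (a simplified, hedged model with unit impulses and invented parameters, not the authors' transceiver): each data bit is repeated over several frames, the pulse position within a frame is set by a pseudo-random hopping code, and the bit value adds a small pulse-position shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def th_ir_waveform(bits, frames_per_symbol=4, slots_per_frame=8,
                   samples_per_slot=16, ppm_shift=2):
    """Generate a baseband time-hopping PPM pulse train (unit impulses)."""
    hop_code = rng.integers(0, slots_per_frame, size=frames_per_symbol)
    frame_len = slots_per_frame * samples_per_slot
    out = np.zeros(len(bits) * frames_per_symbol * frame_len)
    for i, b in enumerate(bits):
        for f in range(frames_per_symbol):
            slot = (hop_code[f] + b * ppm_shift) % slots_per_frame
            start = (i * frames_per_symbol + f) * frame_len + slot * samples_per_slot
            out[start] = 1.0   # a real system would apply a shaped pulse here
    return out

print(th_ir_waveform([1, 0, 1]).nonzero()[0])   # pulse positions per frame
```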

  1. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D. [Lockheed Martin Corp., Schenectady, NY (United States)]; Yancey, R.N. [Advanced Research and Applications Corp., Dayton, OH (United States)]

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data is presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison.

  2. Developing on-demand secure high-performance computing services for biomedical data analytics.

    Science.gov (United States)

    Robison, Nicholas; Anderson, Nick

    2013-01-01

    We propose a technical and process model to support biomedical researchers requiring on-demand high performance computing on potentially sensitive medical datasets. Our approach describes the use of cost-effective, secure and scalable techniques for processing medical information via protected and encrypted computing clusters within a model High Performance Computing (HPC) environment. The process model supports an investigator defined data analytics platform capable of accepting secure data migration from local clinical research data silos into a dedicated analytic environment, and secure environment cleanup upon completion. We define metrics to support the evaluation of this pilot model through performance and stability tests, and describe evaluation of its suitability towards enabling rapid deployment by individual investigators.

  3. The Role of Computing in High-Energy Physics.

    Science.gov (United States)

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  4. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data, the speed of their manipulation to generate the desired result, and the efficient utilisation of the central processing unit, memory, and communication links are essential aspects of all high performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation, required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block-copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. Cache- and register-blocking techniques result in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. From the measurement of intrinsic parameters and from an application benchmark test run of a multi-block Euler code on the Linux clusters ANUP16, HPC22 and HPC64, it has been found that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers with the added advantage of speed and a high degree of parallelism.
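
    The cache-blocking idea described above can be illustrated with a small, generic sketch (NumPy, tile size chosen arbitrarily; not the paper's benchmark code): the matrices are processed in B x B tiles so that each tile is reused while it is still resident in cache.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Tiled matrix multiply: C = A @ B computed block by block."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for k0 in range(0, k, block):
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, k0:k0 + block] @ B[k0:k0 + block, j0:j0 + block])
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(blocked_matmul(A, B), A @ B)
```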

  5. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
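
    The task-splitting pattern that Condor-COPASI automates can be sketched generically (standard HTCondor submit syntax; run_copasi_chunk.sh is a hypothetical wrapper script, and this is not code taken from Condor-COPASI itself): a parameter scan is divided into chunks and one job is queued per chunk.

```python
# Write a generic HTCondor submit description that queues one job per chunk.
chunks = 10                 # e.g. a 1000-point scan split into 10 jobs
points_per_chunk = 100

submit = """universe   = vanilla
executable = run_copasi_chunk.sh
arguments  = --start $(Process) --count {count}
output     = chunk_$(Process).out
error      = chunk_$(Process).err
log        = scan.log
queue {n}
""".format(count=points_per_chunk, n=chunks)

with open("scan.submit", "w") as f:
    f.write(submit)
# Submit the whole batch with: condor_submit scan.submit
```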

  6. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.]

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  7. High Performance Commodity Networking in a 512-CPU Teraflop Beowulf Cluster for Computational Astrophysics

    CERN Document Server

    Dubinski, John; Humble, Robin; Loken, Chris; Martin, Peter; Pen, Ue-Li

    2003-01-01

    We describe a new 512-CPU Beowulf cluster with Teraflop performance dedicated to problems in computational astrophysics. The cluster incorporates a cubic network topology based on inexpensive commodity 24-port gigabit switches and point-to-point connections through the second gigabit port on each Linux server. This configuration has network performance competitive with more expensive cluster configurations and is scalable to much larger systems using other network topologies. Networking represents only about 9% of our total system cost of USD $561K. The standard Top 500 HPL Linpack benchmark rating is 1.202 Teraflops on 512 CPUs, so computing costs by this measure are $0.47/Megaflop. We also describe 4 different astrophysical applications using complex parallel algorithms for studying large-scale structure formation, galaxy dynamics, magnetohydrodynamic flows onto black holes and planet formation, currently running on the cluster and achieving high parallel performance. The MHD code achieved a sustained speed of...
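
    As a quick sanity check (simple arithmetic on the two figures quoted in the abstract, not additional data from the paper), the cost-per-performance number follows directly:

\[
\frac{\$561{,}000}{1.202\ \text{Tflops}} \;=\; \frac{\$561{,}000}{1.202\times 10^{6}\ \text{Mflops}} \;\approx\; \$0.47/\text{Mflop}.
\]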

  8. Half-quadratic cost function for computing arbitrary phase shifts and phase: Adaptive out of step phase shifting.

    Science.gov (United States)

    Rivera, Mariano; Bizuet, Rocky; Martinez, Amalia; Rayas, Juan A

    2006-04-17

    We present a phase-shifting method that is robust to irregular and unknown phase steps. The method is formulated as the minimization of a half-quadratic (robust) regularized cost function for simultaneously computing phase maps and arbitrary phase shifts. Convergence to, at least, a local minimum is guaranteed. The algorithm can be understood as a phase-refinement strategy that uses as its initial guess a coarsely computed phase and coarsely estimated phase shifts. Such a coarse phase is assumed to be corrupted with artifacts produced by the use of a phase-shifting algorithm with imprecise phase steps. The refinement is achieved by alternated minimization of the cost function, iteratively computing the phase-map correction, an outlier-rejection map and the phase-shift correction, respectively. The method's performance is demonstrated by comparison with standard filtering and arbitrary phase-step detection algorithms.
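
    In generic terms (this is the standard half-quadratic form, stated here as an illustration rather than the authors' exact functional), such a cost couples a weighted data term on the interferogram residuals with a smoothness regularizer, and is minimized by alternating over the phase map, the phase shifts and the outlier weights:

\[
U(\phi,\delta,\omega)\;=\;\sum_{k}\sum_{\mathbf{x}}\Bigl[\,\omega_{k,\mathbf{x}}\,r_{k,\mathbf{x}}^{2}(\phi,\delta)\;+\;\Psi(\omega_{k,\mathbf{x}})\Bigr]\;+\;\lambda\sum_{\mathbf{x}}\lVert\nabla\phi_{\mathbf{x}}\rVert^{2},
\qquad
r_{k,\mathbf{x}} \;=\; I_{k}(\mathbf{x})-a(\mathbf{x})-b(\mathbf{x})\cos\bigl(\phi(\mathbf{x})+\delta_{k}\bigr),
\]

    where the penalty \(\Psi\) discourages driving weights to zero, so that only genuine outliers are rejected, and minimizing over the weights with the other variables fixed has a closed form, which is what makes the half-quadratic scheme tractable.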

  9. The variation of acute treatment costs of trauma in high-income countries

    Directory of Open Access Journals (Sweden)

    Willenberg Lynsey

    2012-08-01

    Full Text Available Abstract Background In order to assist health service planning, it is essential to understand the factors that influence higher trauma treatment costs. The majority of trauma costing research reports the cost of trauma from the perspective of the receiving hospital. There has been no comprehensive synthesis and little assessment of the drivers of cost variation, such as country, trauma subgroups and methods. The aim of this review is to provide a synthesis of research reporting trauma treatment costs and factors associated with higher treatment costs in high-income countries. Methods A systematic search for articles relating to the cost of acute trauma care was performed and included studies reporting injury severity scores (ISS), per-patient cost/charge estimates, and costing methods. Cost and charge values were indexed to 2011 cost equivalents and converted to US dollars using purchasing power parities. Results A total of twenty-seven studies were reviewed. Eighty-one percent of these studies were conducted in high-income countries, including the USA, Australia, Europe and the UK. Studies reported either a cost (74.1%) or a charge estimate (25.9%) for the acute treatment of trauma. Across studies, the median per-patient cost of acute trauma treatment was $22,448 (IQR: $11,819-$33,701). However, there was variability in the costing methods used, with 18% of studies providing comprehensive cost methods. Sixty-three percent of studies reported the cost or charge items incorporated in their cost analysis, and 52% reported items excluded from their analysis. In all publications reviewed, predictors of cost included Injury Severity Score (ISS), surgical intervention, hospital and intensive care length of stay, polytrauma and age. Conclusion The acute treatment cost of trauma is higher than for other disease groups. Research has been largely conducted in high-income countries, and variability exists in reporting costing methods as well as in the actual costs. Patient populations studied

  10. Bottom-Up Cost Analysis of a High Concentration PV Module

    Energy Technology Data Exchange (ETDEWEB)

    Horowitz, Kelsey A. W.; Woodhouse, Michael; Lee, Hohyun; Smestad, Greg P.

    2016-03-31

    We present a bottom-up model of III-V multi-junction cells, as well as a high concentration PV (HCPV) module. We calculate $0.59/W(DC) manufacturing costs for our model HCPV module design with today's capabilities, and find that reducing cell costs and increasing module efficiency offer the most promising paths for future cost reductions. Cell costs could be significantly reduced via substrate reuse and improved manufacturing yields.

  11. Many Mobile Health Apps Target High-Need, High-Cost Populations, But Gaps Remain.

    Science.gov (United States)

    Singh, Karandeep; Drouin, Kaitlin; Newmark, Lisa P; Lee, JaeHo; Faxvaag, Arild; Rozenblum, Ronen; Pabo, Erika A; Landman, Adam; Klinger, Elissa; Bates, David W

    2016-12-01

    With rising smartphone ownership, mobile health applications (mHealth apps) have the potential to support high-need, high-cost populations in managing their health. While the number of available mHealth apps has grown substantially, no clear strategy has emerged on how providers should evaluate and recommend such apps to patients. Key stakeholders, including medical professional societies, insurers, and policy makers, have largely avoided formally recommending apps, which forces patients to obtain recommendations from other sources. To help stakeholders overcome barriers to reviewing and recommending apps, we evaluated 137 patient-facing mHealth apps (those intended for use by patients to manage their health) that were highly rated by consumers and recommended by experts and that targeted high-need, high-cost populations. We found that there is a wide variety of apps in the marketplace but that few apps address the needs of the patients who could benefit the most. We also found that consumers' ratings were poor indications of apps' clinical utility or usability and that most apps did not respond appropriately when a user entered potentially dangerous health information. Going forward, data privacy and security will continue to be major concerns in the dissemination of mHealth apps. Project HOPE—The People-to-People Health Foundation, Inc.

  12. High Performance Low Cost Digitally Controlled Power Conversion Technology

    DEFF Research Database (Denmark)

    Jakobsen, Lars Tønnes

    2008-01-01

    Digital control of switch-mode power supplies and converters has within the last decade evolved from being an academic subject to an emerging market in the power electronics industry. This development has been pushed mainly by the computer industry that is looking towards digital power management...

  13. Offering Lung Cancer Screening to High-Risk Medicare Beneficiaries Saves Lives and Is Cost-Effective: An Actuarial Analysis

    Science.gov (United States)

    Pyenson, Bruce S.; Henschke, Claudia I.; Yankelevitz, David F.; Yip, Rowena; Dec, Ellynne

    2014-01-01

    Background By a wide margin, lung cancer is the most significant cause of cancer death in the United States and worldwide. The incidence of lung cancer increases with age, and Medicare beneficiaries are often at increased risk. Because of its demonstrated effectiveness in reducing mortality, lung cancer screening with low-dose computed tomography (LDCT) imaging will be covered without cost-sharing starting January 1, 2015, by nongrandfathered commercial plans. Medicare is considering coverage for lung cancer screening. Objective To estimate the cost and cost-effectiveness (ie, cost per life-year saved) of LDCT lung cancer screening of the Medicare population at high risk for lung cancer. Methods Medicare costs, enrollment, and demographics were used for this study; they were derived from the 2012 Centers for Medicare & Medicaid Services (CMS) beneficiary files and were forecast to 2014 based on CMS and US Census Bureau projections. Standard life and health actuarial techniques were used to calculate the cost and cost-effectiveness of lung cancer screening. The cost, incidence rates, mortality rates, and other parameters chosen by the authors were taken from actual Medicare data, and the modeled screenings are consistent with Medicare processes and procedures. Results Approximately 4.9 million high-risk Medicare beneficiaries would meet criteria for lung cancer screening in 2014. Without screening, Medicare patients newly diagnosed with lung cancer have an average life expectancy of approximately 3 years. Based on our analysis, the average annual cost of LDCT lung cancer screening in Medicare is estimated to be $241 per person screened. LDCT screening for lung cancer in Medicare beneficiaries aged 55 to 80 years with a history of ≥30 pack-years of smoking and who had smoked within 15 years is low cost, at approximately $1 per member per month. This assumes that 50% of these patients were screened. Such screening is also highly cost-effective, at <$19,000 per life

  14. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  15. Computational high-resolution optical imaging of the living human retina

    Science.gov (United States)

    Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.

    2015-07-01

    High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.

  16. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto

    2013-09-10

    State-of-the-art computers need high performance transistors, which consume ultra-low power, resulting in longer battery lifetime. Billions of transistors are integrated neatly using a matured silicon fabrication process to maintain the performance-per-cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high performance transistors are considered the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process to convert high performance silicon electronics into a flexible and semi-transparent form while retaining its performance, process compatibility, integration density and cost. We demonstrate high-k/metal gate stack based p-type metal oxide semiconductor field effect transistors on a 4 inch silicon fabric released from bulk silicon (100) wafers, with a sub-threshold swing of 80 mV dec^-1 and an on/off ratio of near 10^4 within 10% device uniformity, with a minimum bending radius of 5 mm and an average transmittance of ~7% in the visible spectrum.

  17. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  18. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  19. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  20. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  1. The Impact of High Speed Machining on Computing and Automation

    Institute of Scientific and Technical Information of China (English)

    KKB Hon; BT Hang Tuah Baharudin

    2006-01-01

    Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for Rapid Tooling and Manufacturing applications. These new technologies are attractive for competitive manufacturing because of their technical advantages, i.e. a significant reduction in lead-time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials, it also demands increasingly sophisticated CAD/CAM software and powerful CNC controllers that require more supporting technologies. This paper explores the computational requirements and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.

  2. Potentially Low Cost Solution to Extend Use of Early Generation Computed Tomography

    Directory of Open Access Journals (Sweden)

    Tonna, Joseph E

    2010-12-01

    Full Text Available In preparing a case report on Brown-Séquard syndrome for publication, we made the incidental finding that the inexpensive, commercially available three-dimensional (3D) rendering software we were using could produce high quality 3D spinal cord reconstructions from any series of two-dimensional (2D) computed tomography (CT) images. This finding raises the possibility that spinal cord imaging capabilities can be expanded where bundled 2D multi-planar reformats and 3D reconstruction software for CT are not available, and in situations where magnetic resonance imaging (MRI) is either not available or not appropriate (e.g. metallic implants). Given the worldwide burden of trauma, and considering the limited availability of MRI and advanced generation CT scanners, we propose an alternative, potentially useful approach to imaging the spinal cord that might be useful in areas where technical capabilities and support are limited. [West J Emerg Med. 2010; 11(5):463-466.]

  3. A project for developing a linear algebra library for high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, J.; Dongarra, J.; DuCroz, J.; Greenbaum, A.; Hammarling, S.; Sorensen, D.

    1988-01-01

    Argonne National Laboratory, the Courant Institute for Mathematical Sciences, and the Numerical Algorithms Group, Ltd., are developing a transportable linear algebra library in Fortran 77. The library is intended to provide a uniform set of subroutines to solve the most common linear algebra problems and to run efficiently on a wide range of high-performance computers. To be effective, the new library must satisfy several criteria. First, it must be highly efficient, or at least "tunable" to high efficiency, on each machine. Second, the user interface must be uniform across machines. Otherwise much of the convenience of portability will be lost. Third, the program must be widely available. NETLIB has demonstrated how useful and important it is for these codes to be available easily, and preferably on line. We intend to distribute the new library in a similar way, for no cost or a nominal cost only. In addition, the programs must be well documented.

  4. High-throughput all-atom molecular dynamics simulations using distributed computing.

    Science.gov (United States)

    Buch, I; Harvey, M J; Giorgino, T; Anderson, D P; De Fabritiis, G

    2010-03-22

    Although molecular dynamics simulation methods are useful in the modeling of macromolecular systems, they remain computationally expensive, with production work requiring costly high-performance computing (HPC) resources. We review recent innovations in accelerating molecular dynamics on graphics processing units (GPUs), and we describe GPUGRID, a volunteer computing project that uses the GPU resources of nondedicated desktop and workstation computers. In particular, we demonstrate the capability of simulating thousands of all-atom molecular trajectories generated at an average of 20 ns/day each (for systems of approximately 30 000-80 000 atoms). In conjunction with a potential of mean force (PMF) protocol for computing binding free energies, we demonstrate the use of GPUGRID in the computation of accurate binding affinities of the Src SH2 domain/pYEEI ligand complex by reconstructing the PMF over 373 umbrella sampling windows of 55 ns each (20.5 μs of total data). We obtain a standard free energy of binding of -8.7 ± 0.4 kcal/mol, within 0.7 kcal/mol of experimental results. This infrastructure will provide the basis for a robust system for high-throughput accurate binding affinity prediction.

  5. Challenges of high dam construction to computational mechanics

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chuhan

    2007-01-01

    The current situation and growing prospects of China's hydro-power development and high dam construction are reviewed, with emphasis on key issues for the safety evaluation of large dams and hydro-power plants, especially those associated with the application of state-of-the-art computational mechanics. These include, but are not limited to: stress and stability analysis of dam foundations under external loads; earthquake behavior of dam-foundation-reservoir systems; mechanical properties of mass concrete for dams; high velocity flow and energy dissipation for high dams; scientific and technical problems of hydro-power plants and underground structures; and newly developed types of dams, namely Roller Compacted Concrete (RCC) dams and Concrete Face Rock-fill (CFR) dams. Some examples demonstrating successful utilization of computational mechanics in high dam engineering are given, including seismic nonlinear analysis of arch dam foundations, nonlinear fracture analysis of arch dams under reservoir loads, and failure analysis of arch dam foundations. To make more use of computational mechanics in high dam engineering, it is pointed out that much research on different computational methods, numerical models and solution schemes, and verification through experimental tests and field measurements, is necessary in the future.

  6. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    Science.gov (United States)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premiere operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data is not lost. In a catastrophe, these data would currently recover at 20 GB/hour, taking several days for a restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. This also obviates any changes to the software that generates the most recent data set as it still has the same interface to the file system as it did before updates

  7. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, and can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
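
    The batch-processing idea described here can be sketched with a generic example (a local process pool standing in for a cluster scheduler; the ridge-regression "model" and the data sizes are invented for illustration): several traits are evaluated as independent jobs running in parallel.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def predict_trait(args):
    """Toy stand-in for a genomic prediction job: ridge regression of
    phenotypes on SNP genotypes, returning predicted genetic merit."""
    trait, X, y, lam = args
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return trait, X @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(500, 2000)).astype(float)   # 500 animals x 2000 SNPs
    jobs = [(f"trait_{t}",
             X,
             X @ rng.normal(scale=0.01, size=2000) + rng.normal(size=500),
             10.0)
            for t in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:     # one job per trait
        for trait, pred in pool.map(predict_trait, jobs):
            print(trait, float(pred.mean()))
```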

  8. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  9. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  10. A High-Performance Communication Service for Parallel Servo Computing

    Directory of Open Access Journals (Sweden)

    Cheng Xin

    2010-11-01

    Full Text Available Complexity of algorithms for the servo control in the multi-dimensional, ultra-precise stage application has made multi-processor parallel computing technology necessary. Considering the specific communication requirements in parallel servo computing, we propose a communication service scheme based on the VME bus, which provides high-performance data transmission and precise synchronization trigger support for the processors involved. The communication service is implemented on both the standard VME bus and a user-defined Internal Bus (IB), and can be redefined online. This paper introduces the parallel servo computing architecture and communication service, describes the structure and implementation details of each module in the service, and finally provides a data transmission model and analysis. Experimental results show that the communication service can provide high-speed data transmission with sub-nanosecond-level error of transmission latency, and synchronous trigger with nanosecond-level synchronization error. Moreover, the performance of the communication service is not affected by an increasing number of processors.

  11. ABOUT THE SUITABILITY OF CLOUDS IN HIGH-PERFORMANCE COMPUTING

    Directory of Open Access Journals (Sweden)

    Harald Richter

    2016-01-01

    Full Text Available Cloud computing has become the ubiquitous computing and storage paradigm. It is also attractive for scientists, because they no longer have to maintain their own IT infrastructure, but can outsource it to a Cloud Service Provider of their choice. However, for the case of High-Performance Computing (HPC) in a cloud, as it is needed in simulations or for Big Data analysis, things get more intricate, because HPC codes must stay highly efficient, even when executed by many virtual cores (vCPUs). Older clouds or new standard clouds can fulfil this only under special precautions, which are given in this article. The results can be extrapolated to other cloud OSes than OpenStack and to other codes than OpenFOAM, which were used as examples.

  12. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
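
    A minimal sketch of the kind of multi-criteria evaluation described above is shown below. The candidate architectures, criterion values, and weights are invented for illustration and are not the alternatives analyzed in the report: alternatives that fail the availability/integrity constraints are screened out first, then the survivors are ranked by a weighted value score (lower power, weight, and cost are better).

```python
"""Illustrative weighted-sum multi-criteria selection (all values are made up)."""

# Each candidate: hard constraints (availability, integrity) plus the three
# decision criteria to be traded off (power [W], weight [kg], cost [$]).
candidates = {
    "dual-redundant bus": {"availability": 0.9995, "integrity": True,  "power": 40, "weight": 6.0, "cost": 120e3},
    "triplex voting":     {"availability": 0.9999, "integrity": True,  "power": 65, "weight": 9.5, "cost": 180e3},
    "simplex + watchdog": {"availability": 0.9980, "integrity": False, "power": 25, "weight": 4.0, "cost": 70e3},
}

weights = {"power": 0.3, "weight": 0.3, "cost": 0.4}   # relative importance (sums to 1)
MIN_AVAILABILITY = 0.999

# 1) Screen out anything violating the minimum availability/integrity constraints.
feasible = {name: c for name, c in candidates.items()
            if c["availability"] >= MIN_AVAILABILITY and c["integrity"]}

# 2) Normalize each criterion to [0, 1] where 1 is best (lowest value), then
#    combine with the weights into a single overall value score.
def score(c):
    total = 0.0
    for crit, w in weights.items():
        worst = max(x[crit] for x in feasible.values())
        best = min(x[crit] for x in feasible.values())
        norm = 1.0 if worst == best else (worst - c[crit]) / (worst - best)
        total += w * norm
    return total

for name in sorted(feasible, key=lambda n: score(feasible[n]), reverse=True):
    print(f"{name}: value = {score(feasible[name]):.3f}")
```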

  13. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  14. A computer-based model to assess costs associated with the use of factor VIII and factor IX one-stage and chromogenic activity assays.

    Science.gov (United States)

    Kitchen, S; Blakemore, J; Friedman, K D; Hart, D P; Ko, R H; Perry, D; Platton, S; Tan-Castillo, D; Young, G; Luddington, R J

    2016-04-01

    Measurement of coagulation factor VIII (FVIII) and factor IX (FIX) activity can be associated with a high level of variability using one-stage assays based on the activated partial thromboplastin time (APTT). Chromogenic assays show less variability, but are less commonly used in clinical laboratories. In addition, one-stage assay accuracy using certain reagent and instrument combinations is compromised by some modified recombinant factor concentrates. Reluctance among some in the hematology laboratory community to adopt the use of chromogenic assays may be partly attributable to lack of familiarity and perceived higher associated costs. The aim was to identify and characterize key cost parameters associated with one-stage APTT and chromogenic assays for FVIII and FIX activity using a computer-based cost analysis model. A cost model for FVIII and FIX chromogenic assays relative to APTT assays was generated using assumptions derived from interviews with hematologists and laboratory scientists, common clinical laboratory practice, manufacturer list prices and assay kit configurations. Key factors that contribute to costs are factor-deficient plasma and kit reagents for one-stage and chromogenic assays, respectively. The stability of chromogenic assay kit reagents also limits the cost efficiency compared with APTT testing. Costs for chromogenic assays might be reduced by 50-75% using batch testing, aliquoting and freezing of kit reagents. Both batch testing and aliquoting of chromogenic kit reagents might improve cost efficiency for FVIII and FIX chromogenic assays, but would require validation. Laboratory validation and regulatory approval as well as education and training in the use of chromogenic assays might facilitate wider adoption by clinical laboratories. © 2016 The Authors. Journal of Thrombosis and Haemostasis published by Wiley Periodicals, Inc. on behalf of International Society on Thrombosis and Haemostasis.
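
    The batching effect mentioned above can be made concrete with a toy calculation. All prices, kit sizes, and run counts below are invented placeholders, not figures from the study; the point is only that aliquoting and freezing reagents lets one kit serve several runs instead of being discarded after a single small run.

```python
"""Toy per-test reagent cost: chromogenic kit with and without batching/aliquoting.

All numbers are illustrative placeholders, not values from the cited study.
"""

KIT_PRICE = 900.0       # $ per chromogenic kit (hypothetical)
TESTS_PER_KIT = 40      # nominal number of tests a kit can support (hypothetical)


def cost_per_test(tests_per_run, runs_per_kit):
    """Reagent cost per reported result.

    Without aliquoting, a reconstituted kit is assumed to be discarded after one
    run, so unused capacity is wasted.  Aliquoting and freezing lets the same kit
    serve several runs (runs_per_kit > 1), up to its nominal capacity.
    """
    usable_tests = min(TESTS_PER_KIT, tests_per_run * runs_per_kit)
    return KIT_PRICE / usable_tests


ad_hoc = cost_per_test(tests_per_run=5, runs_per_kit=1)    # small, unbatched runs
batched = cost_per_test(tests_per_run=5, runs_per_kit=3)   # aliquoted and frozen
print(f"ad hoc: ${ad_hoc:.2f}/test, batched: ${batched:.2f}/test "
      f"({100 * (1 - batched / ad_hoc):.0f}% reduction)")
```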

  15. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    Science.gov (United States)

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  16. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  17. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion modelling and simulation are very challenging, and the associated High Performance Computing issues are addressed here. Toolsets for job launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  18. Artificial Intelligence and the High School Computer Curriculum.

    Science.gov (United States)

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  19. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  20. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, or the simulation of complex processes in a wide variety of industries. (Author)

  1. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  2. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  3. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  4. 42 CFR 412.86 - Payment for extraordinarily high-cost day outliers.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Payment for extraordinarily high-cost day outliers... Outlier Cases, Special Treatment Payment for New Technology, and Payment Adjustment for Certain Replaced Devices Payment for Outlier Cases § 412.86 Payment for extraordinarily high-cost day outliers. For...

  5. Oregon's High School Dropouts: Examining the Economic and Social Costs. Research Brief

    Science.gov (United States)

    Foundation for Educational Choice, 2010

    2010-01-01

    The Foundation for Educational Choice recently commissioned a new study to examine the economic and social costs of Oregon's high school dropouts. Emily House, the study's author, analyzed how dropouts in the state dramatically impact state finances through reduced tax revenues, increased Medicaid costs, and high incarceration rates. House's study…

  6. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP3 and SP4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and in clusters, respectively; 4. The IBM System-on-a-Chip used in IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  7. A high-performance reconfigurable computing solution for Peptide mass fingerprinting.

    Science.gov (United States)

    Coca, Daniel; Bogdan, Istvan; Beynon, Robert J

    2010-01-01

    High-throughput, MS-based proteomics studies are generating very large volumes of biologically relevant data. Given the central role of proteomics in emerging fields such as system/synthetic biology and biomarker discovery, the amount of proteomic data is expected to grow at unprecedented rates over the next decades. At the moment, there is a pressing need for high-performance computational solutions to accelerate the analysis and interpretation of this data. Performance gains achieved by grid computing in this area are not spectacular, especially given the significant power consumption, maintenance costs and floor space required by large server farms. This paper introduces an alternative, cost-effective high-performance bioinformatics solution for peptide mass fingerprinting based on Field Programmable Gate Array (FPGA) devices. At the heart of this approach stands the concept of mapping algorithms onto custom digital hardware that can be programmed to run on an FPGA. Specifically, in this case the entire computational flow associated with peptide mass fingerprinting, namely raw mass spectra processing and database searching, has been mapped onto custom hardware processors that are programmed to run on a multi-FPGA system coupled with a conventional PC server. The system achieves an almost 2,000-fold speed-up when compared with a conventional implementation of the algorithms in software running on a 3.06 GHz Xeon PC server.
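
    Leaving the FPGA implementation aside, the database-search step at the core of peptide mass fingerprinting is easy to state in a few lines: compare the observed peptide masses from a spectrum against theoretical masses from an in-silico digest and count matches within a mass tolerance. The sketch below is a generic software illustration of that matching step, not the authors' hardware design; the masses and tolerance are arbitrary examples.

```python
"""Generic peptide-mass-fingerprinting match score (not the FPGA pipeline itself)."""

def match_score(observed, theoretical, tol_da=0.2):
    """Count observed peptide masses within tol_da of any theoretical mass."""
    hits = 0
    for m in observed:
        # A linear scan is fine for a sketch; the FPGA design parallelizes this search
        # across many candidate proteins at once.
        if any(abs(m - t) <= tol_da for t in theoretical):
            hits += 1
    return hits


# Hypothetical masses (Da): peaks picked from a spectrum vs. the in-silico digest
# of one candidate protein from the database.
observed_masses = [842.5, 1045.6, 1179.6, 1475.8, 2211.1]
candidate_digest = [842.51, 927.49, 1045.56, 1179.60, 1305.7, 1475.79, 1838.0]

score = match_score(observed_masses, candidate_digest)
print(f"{score}/{len(observed_masses)} peptide masses matched")
```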

  8. Low Cost High Performance Nanostructured Spectrally Selective Coating

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Sungho [Univ. of California, San Diego, CA (United States)

    2017-04-05

    Sunlight absorbing coating is a key enabling technology to achieve high-temperature high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet all the following three stringent requirements: high thermal efficiency (usually measured by figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate and coat black oxide nanoparticles onto solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto absorber metal surface. The material composition, size distribution and morphology of the nanoparticle are guided by numeric modeling. Optical and thermal properties have been both modeled and measured. High temperature durability has been achieved by using nanocomposites and high temperature annealing. Mechanical durability on thermal cycling have also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentration solar power (CSP) plants.

  9. High resolution computed tomography for peripheral facial nerve paralysis

    Energy Technology Data Exchange (ETDEWEB)

    Koester, O.; Straehler-Pohl, H.J.

    1987-01-01

    High-resolution computed tomographic examinations of the petrous bones were performed on 19 patients with confirmed peripheral facial nerve paralysis. High-resolution CT provides accurate information regarding the extent, and usually the type, of the pathological process; this can be accurately localised with a view to possible surgical treatment. The examination also differentiates these cases from idiopathic paresis, which showed no radiological changes. Destruction of the petrous bone, without facial nerve symptoms, makes early suitable treatment mandatory.

  10. Cost evaluation of a DSN high level real-time language

    Science.gov (United States)

    Mckenzie, M.

    1977-01-01

    The hypothesis that the implementation of a DSN High Level Real Time Language will reduce real time software expenditures is explored. The High Level Real Time Language is found to be both affordable and cost-effective.

  11. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  12. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    Science.gov (United States)

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.

  13. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, the methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally proved in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain, together with new fast routine implementations, is proposed as well.
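
    As a loose software analogue of the CPU-side comparison described above (Python's multiprocessing standing in for MPI or OpenMP, and NumPy's FFT standing in for a hand-tuned kernel), the sketch below times a batch of FFTs computed serially versus split across worker processes; the signal sizes and worker count are arbitrary.

```python
"""Serial vs. process-parallel batch FFT timing (a rough analogue only;
sizes and worker count are arbitrary)."""
import time
from concurrent.futures import ProcessPoolExecutor
import numpy as np


def fft_block(block):
    return np.fft.fft(block, axis=-1)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signals = rng.normal(size=(64, 2**16))      # 64 independent signals

    t0 = time.perf_counter()
    serial = [fft_block(s) for s in signals]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(fft_block, signals, chunksize=8))
    t_parallel = time.perf_counter() - t0

    print(f"serial: {t_serial:.3f} s, 4 workers: {t_parallel:.3f} s")
```

    Note that for transforms this small the inter-process data copying can outweigh the speed-up, which mirrors the kind of overhead trade-off the paper measures between programming models.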

  14. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2013-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures will be delivered over the 5 days of the School. A Poster Session will be held, at which students are welcome to present their research topics.

  15. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  16. Parallel computation of seismic analysis of high arch dam

    Institute of Scientific and Technical Information of China (English)

    Chen Houqun; Ma Huaifa; Tu Jin; Cheng Guangqing; Tang Juzhen

    2008-01-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms for the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and the configuration of the cracks can be directly simulated. In the seismic response analysis of ADs, all of the following factors are involved: the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combined effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  17. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At the LHC at CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends of GPU use in HEP.

  18. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    Science.gov (United States)

    2004-07-01

    [Only disjoint fragments of this report are available in this record.] ... and other energy feedstock more efficiently. Signal Transduction Pathways: Develop atomic-level computational models and simulations of complex ... biomolecules to explain and predict cell signal pathways and their disrupters. Yield understanding of initiation of cancer and other diseases and their ... calculations also introduces a requirement for a high degree of internodal connectivity (high bisection bandwidth). These needs cannot be met simply by ...

  19. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motions: Computer program description

    Science.gov (United States)

    Petrarca, J. R.; Harrison, B. A.; Redman, M. C.; Rowe, W. S.

    1979-01-01

    A digital computer program was developed to calculate unsteady loadings caused by motions of lifting surfaces with leading edge and trailing edge controls, based on the subsonic kernel function approach. The pressure singularities at the hinge line and side edges were extracted analytically as a preliminary step to solving the integral equation of collocation. The program calculates generalized aerodynamic forces for user-supplied deflection modes. Optional intermediate output includes pressure at an array of points, and sectional generalized forces. From one to six controls on the half span can be accommodated.

  20. Can broader diffusion of value-based insurance design increase benefits from US health care without increasing costs? Evidence from a computer simulation model.

    Directory of Open Access Journals (Sweden)

    R Scott Braithwaite

    2010-02-01

    Full Text Available BACKGROUND: Evidence suggests that cost sharing (i.e., copayments and deductibles) decreases health expenditures but also reduces essential care. Value-based insurance design (VBID) has been proposed to encourage essential care while controlling health expenditures. Our objective was to estimate the impact of broader diffusion of VBID on US health care benefits and costs. METHODS AND FINDINGS: We used a published computer simulation of costs and life expectancy gains from US health care to estimate the impact of broader diffusion of VBID. Two scenarios were analyzed: (1) applying VBID solely to pharmacy benefits and (2) applying VBID to both pharmacy benefits and other health care services (e.g., devices). We assumed that cost sharing would be eliminated for high-value services ($300,000 per life-year. All costs are provided in 2003 US dollars. Our simulation estimated that approximately 60% of health expenditures in the US are spent on low-value services, 20% are spent on intermediate-value services, and 20% are spent on high-value services. Correspondingly, the vast majority (80%) of health expenditures would have cost sharing that is impacted by VBID. With prevailing patterns of cost sharing, health care conferred 4.70 life-years at a per-capita annual expenditure of US$5,688. Broader diffusion of VBID to pharmaceuticals increased the benefit conferred by health care by 0.03 to 0.05 additional life-years, without increasing costs and without increasing out-of-pocket payments. Broader diffusion of VBID to other health care services could increase the benefit conferred by health care by 0.24 to 0.44 additional life-years, also without increasing costs and without increasing overall out-of-pocket payments. Among those without health insurance, using cost savings from VBID to subsidize insurance coverage would increase the benefit conferred by health care by 1.21 life-years, a 31% increase. CONCLUSION: Broader diffusion of VBID may amplify benefits from

  1. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
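
    The periodogram calculation mentioned above is a good example of a CPU-bound, embarrassingly parallel workload: each light curve (and each trial frequency) can be processed independently. The sketch below computes a brute-force Lomb-Scargle-style power spectrum for one synthetic light curve; it is a generic illustration, not the NASA Star and Exoplanet Database periodogram code, and the period, sampling, and noise level are arbitrary.

```python
"""Brute-force Lomb-Scargle periodogram for unevenly sampled data (generic
illustration of a CPU-bound light-curve workload)."""
import numpy as np


def lomb_scargle(t, y, freqs):
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power


# Synthetic, unevenly sampled light curve with a 2.5-day period.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 90.0, 400))
y = 1.0 + 0.1 * np.sin(2 * np.pi * t / 2.5) + 0.02 * rng.normal(size=t.size)

freqs = np.linspace(0.05, 2.0, 5000)    # trial frequencies in cycles/day
power = lomb_scargle(t, y, freqs)
print("best period ~ %.3f days" % (1.0 / freqs[np.argmax(power)]))
```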

  2. DIPSY, a low-cost GPS application with high accuracy

    NARCIS (Netherlands)

    Heijden, W.F.M. van der

    1998-01-01

    To improve the control of unmanned aircraft flying out of visual range, the controller needs to be provided with realtime information about the position and behaviour of the drone during the flight. The position of the drone has to be presented with a relatively high accuracy to obtain accurate flight

  3. Low cost, formable, high T(sub c) superconducting wire

    Science.gov (United States)

    Smialek, James L. (Inventor)

    1991-01-01

    A ceramic superconducting part such as a wire is produced through the partial oxidation of a specially formulated copper alloy in the core. The alloys contain low-level quantities of rare earth and alkaline earth dopant elements. Upon oxidation at high temperature, superconducting oxide phases are formed as a thin film.

  4. DIPSY, a low-cost GPS application with high accuracy

    NARCIS (Netherlands)

    Heijden, W.F.M. van der

    1999-01-01

    To improve the control of unmanned aircraft flying out of visual range, the controller needs to be provided with real-time information about the position and behaviour of the drone during the flight. The position of the drone has to be presented with a relatively high accuracy to obtain accurate flight

  5. Calculus in High School--At What Cost?

    Science.gov (United States)

    Sorge, D. H.; Wheatley, G. H.

    1977-01-01

    Evidence on the decline in preparation of entering calculus students and the relationship to high school preparation is presented, focusing on the trend toward the de-emphasis of trigonometry and analytic geometry in favor of calculus. Data on students' perception of the adequacy of their preparation are also presented. (Author/MN)

  6. Low cost routes to high purity silicon and derivatives thereof

    Energy Technology Data Exchange (ETDEWEB)

    Laine, Richard M; Krug, David James; Marchal, Julien Claudius; Mccolm, Andrew Stewart

    2013-07-02

    The present invention is directed to a method for providing an agricultural waste product having amorphous silica, carbon, and impurities; extracting from the agricultural waste product an amount of the impurities; changing the ratio of carbon to silica; and reducing the silica to a high purity silicon (e.g., to photovoltaic silicon).

  7. Low-Cost, High-Performance Analog Optical Links

    Science.gov (United States)

    2006-12-01

    [Only report front-matter fragments are available for this record: contents entries for "BBR monolithic or integrated hybrid (long term solution, higher-risk, high-payoff implementation)", "Theoretical analysis of BBR", and "BBR - integrated hybrid", together with figure captions concerning the DBR laser and the adiabatic chirp of the master-slave DBR laser.]

  10. How to Fight the High Cost of Curricular Glut

    Science.gov (United States)

    Bugeja, Michael

    2008-01-01

    Curriculum management is at the source of issues consuming academics, including high tuition, low adjunct pay, shared governance, graduate education, academic calendars, and budgetary models. The issue has the most impact at Ph.D.-granting public universities, but any institution can benefit from analyzing the source of poorly managed pedagogy,…

  12. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    Science.gov (United States)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.

  13. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Full Text Available Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
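
    Of the methods listed above, plain Monte Carlo integration is the simplest to show concretely. The sketch below estimates an integral together with its statistical uncertainty, which shrinks as 1/sqrt(N); that scaling is what drives the large sample sizes, and hence the data volumes, in HEP simulation. The integrand is a toy stand-in chosen arbitrarily, not a real cross section.

```python
"""Plain Monte Carlo estimate of an integral with its 1/sqrt(N) uncertainty.
The integrand is a toy stand-in, not a real HEP cross section."""
import numpy as np


def mc_integrate(f, lo, hi, n, rng):
    x = rng.uniform(lo, hi, n)
    fx = f(x)
    volume = hi - lo
    estimate = volume * fx.mean()
    error = volume * fx.std(ddof=1) / np.sqrt(n)   # statistical uncertainty ~ 1/sqrt(N)
    return estimate, error


rng = np.random.default_rng(7)
f = lambda x: np.exp(-x) * np.sin(x) ** 2          # toy integrand on [0, 10]

for n in (10**3, 10**5, 10**7):
    est, err = mc_integrate(f, 0.0, 10.0, n, rng)
    print(f"N = {n:>9,d}: integral ~ {est:.5f} +/- {err:.5f}")
```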

  14. Time-driven activity-based costing of low-dose-rate and high-dose-rate brachytherapy for low-risk prostate cancer.

    Science.gov (United States)

    Ilg, Annette M; Laviana, Aaron A; Kamrava, Mitchell; Veruttipong, Darlene; Steinberg, Michael; Park, Sang-June; Burke, Michael A; Niedzwiecki, Douglas; Kupelian, Patrick A; Saigal, Christopher

    Cost estimates through traditional hospital accounting systems are often arbitrary and ambiguous. We used time-driven activity-based costing (TDABC) to determine the true cost of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy for prostate cancer and demonstrate opportunities for cost containment at an academic referral center. We implemented TDABC for patients treated with I-125, preplanned LDR and computed tomography based HDR brachytherapy with two implants from initial consultation through 12-month followup. We constructed detailed process maps for provision of both HDR and LDR. Personnel, space, equipment, and material costs of each step were identified and used to derive capacity cost rates, defined as price per minute. Each capacity cost rate was then multiplied by the relevant process time and products were summed to determine total cost of care. The calculated cost to deliver HDR was greater than LDR by $2,668.86 ($9,538 vs. $6,869). The first and second HDR treatment day cost $3,999.67 and $3,955.67, whereas LDR was delivered on one treatment day and cost $3,887.55. The greatest overall cost driver for both LDR and HDR was personnel at 65.6% ($4,506.82) and 67.0% ($6,387.27) of the total cost. After personnel costs, disposable materials contributed the second most for LDR ($1,920.66, 28.0%) and for HDR ($2,295.94, 24.0%). With TDABC, the true costs to deliver LDR and HDR from the health system perspective were derived. Analysis by physicians and hospital administrators regarding the cost of care afforded redesign opportunities including delivering HDR as one implant. Our work underscores the need to assess clinical outcomes to understand the true difference in value between these modalities. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
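
    The TDABC arithmetic described above (a capacity cost rate in dollars per minute multiplied by the process time of each step, summed over the process map) is straightforward to reproduce. The steps, rates, and minutes below are invented placeholders, not the study's data; they only illustrate how a process map rolls up into a total cost and how the share of each step (e.g., personnel-heavy steps) can be read off.

```python
"""Time-driven activity-based costing: sum of (capacity cost rate x process time).
Steps, rates, and minutes are illustrative placeholders, not the study's data."""

# (step, capacity cost rate in $/minute, process time in minutes)
hdr_process_map = [
    ("consultation",         4.10,  45),
    ("OR implant, day 1",   11.50, 120),
    ("treatment planning",   6.00,  90),
    ("HDR delivery, day 1",  9.20,  60),
    ("OR implant, day 2",   11.50, 120),
    ("HDR delivery, day 2",  9.20,  60),
    ("follow-up visits",     4.10,  80),
]


def total_cost(process_map):
    return sum(rate * minutes for _, rate, minutes in process_map)


cost = total_cost(hdr_process_map)
for step, rate, minutes in hdr_process_map:
    share = 100 * rate * minutes / cost
    print(f"{step:<20s} ${rate * minutes:9,.2f}  ({share:4.1f}%)")
print(f"{'total':<20s} ${cost:9,.2f}")
```

    Dropping the second-implant rows from the process map immediately shows the saving from delivering HDR as a single implant, which is the kind of redesign question the method is meant to support.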

  15. High energy density capacitors for low cost applications

    Science.gov (United States)

    Iyore, Omokhodion David

    Polyvinylidene fluoride (PVDF) and its copolymers with trifluoroethylene, hexafluoropropylene and chlorotrifluoroethylene are the most widely investigated ferroelectric polymers, due to their relatively high electromechanical properties and potential to achieve high energy density. [Bauer, 2010; Zhou et al., 2009] The research community has focused primarily on melt pressed or extruded films of PVDF-based polymers to obtain the highest performance with energy density up to 25 Jcm-3. [Zhou et al., 2009] Solution processing offers an inexpensive, low temperature alternative, which is also easily integrated with flexible electronics. This dissertation focuses on the fabrication of solution-based polyvinylidene fluoride-hexafluoropropylene metal-insulator-metal capacitors on flexible substrates using a photolithographic process. Capacitors were optimized for maximum energy density, high dielectric strength and low leakage current density. It is demonstrated that with the right choice of solvent, electrodes, spin-casting and annealing conditions, high energy density thin film capacitors can be fabricated repeatably and reproducibly. The high electric field dielectric constants were measured and the reliabilities of the polymer capacitors were also evaluated via time-zero breakdown and time-dependent breakdown techniques. Chapter 1 develops the motivation for this work and provides a theoretical overview of dielectric materials, polarization, leakage current and dielectric breakdown. Chapter 2 is a literature review of polymer-based high energy density dielectrics and covers ferroelectric polymers, highlighting PVDF and some of its derivatives. Chapter 3 summarizes some preliminary experimental work and presents materials and electrical characterization that support the rationale for materials selection and process development. Chapter 4 discusses the fabrication of solution-processed PVDF-HFP and modification of its properties by photo-crosslinking. It is followed by a

  16. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  17. High Resolution Muon Computed Tomography at Neutrino Beam Facilities

    CERN Document Server

    Suerfu, Burkhant

    2015-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pio...

  18. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

    The role of gravity is crucial in astrophysics. It determines the evolution of any system, over an enormous range of time and space scales. Astronomical stellar systems, composed of N interacting bodies, are examples of self-gravitating systems, usually treatable with the aid of Newtonian gravity except in particular cases. In this note I will briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems, over the astronomical range of N. I will also point out how modern research in this field necessarily requires heavy use of large-scale computations, due to the simultaneous requirement of high precision and high computational speed.
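
    The computational burden referred to above comes from the direct pairwise force sum, which scales as O(N^2) per time step. The sketch below shows that kernel in plain NumPy (G = 1 units; the softening length and particle count are arbitrary); it is exactly the loop that large-scale N-body codes accelerate on parallel hardware or replace with tree and mesh approximations.

```python
"""Direct-summation gravitational accelerations: the O(N^2) kernel of a classic
N-body code (G = 1 units; softening value is arbitrary)."""
import numpy as np


def accelerations(pos, mass, softening=1e-3):
    """pos: (N, 3) positions, mass: (N,) masses -> (N, 3) accelerations."""
    # Pairwise separation vectors r_j - r_i for all i, j.
    dx = pos[None, :, :] - pos[:, None, :]
    dist2 = (dx ** 2).sum(-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                    # no self-interaction
    # a_i = sum_j m_j (r_j - r_i) / |r_j - r_i|^3
    return (dx * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)


rng = np.random.default_rng(3)
n = 1024
pos = rng.normal(size=(n, 3))
mass = np.full(n, 1.0 / n)
acc = accelerations(pos, mass)
print("max |a| =", np.abs(acc).max())
```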

  19. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  20. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy and reliability, and reduce the computational cost, of numerical methods for high-frequency electromagnetics applications such as antennas, microwave devices, and radar scattering.

  1. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  2. The role of interpreters in high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it's the reason why the use of interpreters in high-performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  3. Cost and Performance-Based Resource Selection Scheme for Asynchronous Replicated System in Utility-Based Computing Environment

    Directory of Open Access Journals (Sweden)

    Wan Nor Shuhadah Wan Nik

    2017-04-01

    Full Text Available This paper addresses the resource selection problem for asynchronous replicated systems in a utility-based computing environment. The problem deserves special attention because most existing replication schemes for such systems either implicitly support only synchronous replication or consider only read-only jobs. It is complex to solve because two main issues must be handled simultaneously: (1) the difficulty of predicting the performance of resources in terms of job response time, and (2) the need for an efficient mechanism to measure the trade-off between performance and the monetary cost incurred on resources, so that cost is minimized while job response time remains low. A simple yet efficient algorithm that deals with this complexity is therefore proposed, with the problem formulated as a Multi Criteria Decision Making (MCDM) problem. The algorithm has two advantages. First, it hides the complexity of the resource selection process without neglecting the components that affect job response time; the difficulty of estimating job response time is captured by representing it through different QoS criteria levels at each resource. Second, this representation further simplifies the measurement of the trade-off between performance and the monetary cost incurred on resources. Experiments show that the proposed resource selection scheme achieves good system performance and low monetary cost compared with existing algorithms.
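
    The abstract does not spell out the MCDM formulation, so the following is only a minimal sketch of a generic weighted-sum ranking over hypothetical QoS and cost criteria (the resource names, criteria, and weights are all assumed); it illustrates how response-time levels and monetary cost can be traded off in a single score, not the authors' actual scheme.

```python
# Minimal sketch of a weighted-sum multi-criteria resource ranking.
# Resource names, criteria, and weights below are hypothetical placeholders.

resources = {
    "site_A": {"est_response_time_s": 120.0, "cost_per_hour": 0.90},
    "site_B": {"est_response_time_s": 200.0, "cost_per_hour": 0.40},
    "site_C": {"est_response_time_s": 150.0, "cost_per_hour": 0.60},
}
weights = {"est_response_time_s": 0.6, "cost_per_hour": 0.4}  # assumed trade-off

def normalized(criterion):
    """Scale a 'lower is better' criterion to [0, 1] across all resources."""
    values = [r[criterion] for r in resources.values()]
    lo, hi = min(values), max(values)
    return {name: (r[criterion] - lo) / (hi - lo) if hi > lo else 0.0
            for name, r in resources.items()}

norms = {c: normalized(c) for c in weights}
scores = {name: sum(weights[c] * norms[c][name] for c in weights)
          for name in resources}
best = min(scores, key=scores.get)  # lowest weighted score wins
print(scores, "->", best)
```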

  4. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  5. Estimating the economic opportunity cost of water use with river basin simulators in a computationally efficient way

    Science.gov (United States)

    Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.

    2017-04-01

    The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet, such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet, this method requires one simulation run per node and per time step, which is demanding computationally for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear
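
    The perturbation estimate described above (add a small volume of water at a given node and time step, re-run the simulator, and divide the change in system benefits by the added volume) reduces to a finite difference. In the sketch below, simulate_benefits is a stand-in for a full river-basin simulator and the benefit curve is invented; it illustrates only the finite-difference idea, not the backward-induction method the authors propose.

```python
# Toy finite-difference estimate of the marginal opportunity cost of water.
# simulate_benefits() stands in for a full river-basin simulator; the
# quadratic benefit curve and the numbers are purely illustrative.

def simulate_benefits(extra_water_m3=0.0, node="reservoir_1", t=0):
    """Hypothetical system-wide benefit ($) as a function of extra water."""
    allocation = 1_000_000.0 + extra_water_m3            # m^3 at (node, t)
    return 2.0 * allocation - 5e-7 * allocation**2       # diminishing returns

def opportunity_cost(node, t, delta_m3=1_000.0):
    """Approximate $ per m^3 of an extra unit of water at (node, t)."""
    base = simulate_benefits(0.0, node, t)
    perturbed = simulate_benefits(delta_m3, node, t)
    return (perturbed - base) / delta_m3

print(f"~{opportunity_cost('reservoir_1', 0):.3f} $/m^3")
```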

  6. Data of cost-optimality and technical solutions for high energy performance buildings in warm climate.

    Science.gov (United States)

    Zacà, Ilaria; D'Agostino, Delia; Maria Congedo, Paolo; Baglivo, Cristina

    2015-09-01

    The data reported in this article refers to input and output information related to the research articles entitled Assessment of cost-optimality and technical solutions in high performance multi-residential buildings in the Mediterranean area by Zacà et al. (Assessment of cost-optimality and technical solutions in high performance multi-residential buildings in the Mediterranean area, in press.) and related to the research article Cost-optimal analysis and technical comparison between standard and high efficient mono residential buildings in a warm climate by Baglivo et al. (Energy, 2015, 10.1016/j.energy.2015.02.062, in press).

  7. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.

  8. High-throughput neuroimaging-genetics computational infrastructure.

    Science.gov (United States)

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring validating, and disseminating of complex protocols that utilize

  9. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    Science.gov (United States)

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase that can be efficiently implemented in numerical simulations using the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
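
    The claim that the grid shrinks roughly by the stretching factor can be made concrete with a back-of-the-envelope estimate: the number of samples needed to resolve a pulse scales with its time-bandwidth product. The sketch below only illustrates that scaling argument with an assumed bandwidth, pulse durations, and oversampling margin; it does not reproduce the authors' split-transform algorithm.

```python
# Back-of-the-envelope grid-size estimate for sampling a chirped pulse.
# Illustrates why the required FFT grid grows with the stretching factor.
# All numbers are hypothetical.

import math

def grid_points(duration_s, bandwidth_hz, oversample=4):
    """Nyquist-style estimate: points ~ time window x bandwidth x margin."""
    return 2 ** math.ceil(math.log2(oversample * duration_s * bandwidth_hz))

bandwidth = 40e12          # ~40 THz optical bandwidth (assumed)
tl_duration = 30e-15       # near transform-limited pulse (assumed)
stretched_duration = 1e-9  # after a stretcher, ~1 ns (assumed)

print("transform-limited grid:", grid_points(tl_duration, bandwidth))
print("stretched-pulse grid:  ", grid_points(stretched_duration, bandwidth))
```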

  10. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
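
    As a small illustration of why extra digits matter, the sketch below evaluates an expression that suffers catastrophic cancellation in IEEE double precision and again with 50-digit arithmetic, using the third-party mpmath package (an assumed dependency chosen for illustration; the survey discusses other high-precision software).

```python
# Contrast IEEE double precision with 50-digit arithmetic on an expression
# that suffers catastrophic cancellation: (1 - cos(x)) / x**2 -> 0.5 as x -> 0.

import math
from mpmath import mp, mpf, cos

x = 1e-8
double_result = (1.0 - math.cos(x)) / x**2         # ruined by cancellation

mp.dps = 50                                        # 50 significant digits
xm = mpf("1e-8")
hp_result = (1 - cos(xm)) / xm**2                  # close to the true 0.49999...

print("double  :", double_result)
print("50-digit:", hp_result)
```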

  11. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems in distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process.

  12. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  13. The Cost of Workplace Flexibility for High-Powered Professionals

    OpenAIRE

    Claudia D. Goldin; Katz, Lawrence F.

    2011-01-01

    The authors study the pecuniary penalties for family-related amenities in the workplace (e.g., job interruptions, short hours, part-time work, and flexibility during the workday), how women have responded to them, and how the penalties have changed over time. The pecuniary penalties to behaviors that are beneficial to family appear to have decreased in many professions. Self-employment has declined in many of the high-end professions (e.g., pharmacy, optometry, dentistry, law, medicine, and v...

  14. Forecast Skill and Computational Cost of the Correlation Models in 3DVAR Data Assimilation

    Science.gov (United States)

    2012-11-30

    extent this phenomenon can be explained by the presence of small-scale motions in the 3 km configuration that are barely constrained by the available...have a unit diagonal and requires appropriate renormalization by rescaling. The exact computation of the rescaling factors (diagonal elements of) is a...computationally expensive procedure, which needs an efficient numerical approximation. In this study approximate renormalization techniques based on

  15. Bottom-Up Cost Analysis of a High Concentration PV Module; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Horowitz, K.; Woodhouse, M.; Lee, H.; Smestad, G.

    2015-04-13

    We present a bottom-up model of III-V multi-junction cells, as well as a high concentration PV (HCPV) module. We calculate $0.65/Wp(DC) manufacturing costs for our model HCPV module design with today’s capabilities, and find that reducing cell costs and increasing module efficiency offer the most promising pathways for future cost reductions. Cell costs could be significantly reduced via an increase in manufacturing scale, substrate reuse, and improved manufacturing yields. We also identify several other significant drivers of HCPV module costs, including the Fresnel lens primary optic, module housing, thermal management, and the receiver board. These costs could potentially be lowered by employing innovative module designs.

  16. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  17. Controlling Capital Costs in High Performance Office Buildings: A Review of Best Practices for Overcoming Cost Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Pless, S.; Torcellini, P.

    2012-05-01

    This paper presents a set of 15 best practices for owners, designers, and construction teams of office buildings to reach high performance goals for energy efficiency, while maintaining a competitive budget. They are based on the recent experiences of the owner and design/build team for the Research Support Facility (RSF) on the National Renewable Energy Laboratory's campus in Golden, CO, which show that achieving this outcome requires each key integrated team member to understand their opportunities to control capital costs.

  18. Empathy costs: Negative emotional bias in high empathisers.

    Science.gov (United States)

    Chikovani, George; Babuadze, Lasha; Iashvili, Nino; Gvalia, Tamar; Surguladze, Simon

    2015-09-30

    Excessive empathy has been associated with compassion fatigue in health professionals and caregivers. We investigated an effect of empathy on emotion processing in 137 healthy individuals of both sexes. We tested a hypothesis that high empathy may underlie increased sensitivity to negative emotion recognition which may interact with gender. Facial emotion stimuli comprised happy, angry, fearful, and sad faces presented at different intensities (mild and prototypical) and different durations (500ms and 2000ms). The parameters of emotion processing were represented by discrimination accuracy, response bias and reaction time. We found that higher empathy was associated with better recognition of all emotions. We also demonstrated that higher empathy was associated with response bias towards sad and fearful faces. The reaction time analysis revealed that higher empathy in females was associated with faster (compared with males) recognition of mildly sad faces of brief duration. We conclude that although empathic abilities were providing for advantages in recognition of all facial emotional expressions, the bias towards emotional negativity may potentially carry a risk for empathic distress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Low-Cost, High-Performance Combustion Chamber

    Science.gov (United States)

    Fortini, Arthur J.

    2015-01-01

    Ultramet designed and fabricated a lightweight, high-temperature combustion chamber for use with cryogenic LOX/CH4 propellants that can deliver a specific impulse of approx.355 seconds. This increase over the current 320-second baseline of nitrogen tetroxide/monomethylhydrazine (NTO/MMH) will result in a propellant mass decrease of 55 lb for a typical lunar mission. The material system was based on Ultramet's proven oxide-iridium/rhenium architecture, which has been hot-fire tested with stoichiometric oxygen/hydrogen for hours. Instead of rhenium, however, the structural material was a niobium or tantalum alloy that has excellent yield strength at both ambient and elevated temperatures. Phase I demonstrated alloys with yield strength-to-weight ratios more than three times that of rhenium, which will significantly reduce chamber weight. The starting materials were also two orders of magnitude less expensive than rhenium and were less expensive than the C103 niobium alloy commonly used in low-performance engines. Phase II focused on the design, fabrication, and hot-fire testing of a 12-lbf thrust class chamber with LOX/CH4, and a 100-lbf chamber for LOX/CH4. A 5-lbf chamber for NTO/MMH also was designed and fabricated.

  20. Optimum computer design of a composite cardan shaft according to the criteria of cost and weight

    Science.gov (United States)

    Nepershin, R. I.; Klimenov, V. V.

    1987-07-01

    A successive unconditional minimization algorithm using penalty functions without calculation of derivatives is an efficient approach to the computerized optimization of structural elements of hybrid composites with regard to cost and weight, given a small number of design variables and a straightforward calculation of the objective function and constraints.

  1. Cost-Effectiveness Specification for Computer-Based Training Systems. Volume 1. Development

    Science.gov (United States)

    1977-09-01

    Included in this element are such places as auditoria , study halls, demonstration rooms, etc., where large numbers of students can be trained. Offices...etc.) which may accrue over several years will be permitted to surface and balance the large initial capital investment costs of implementing a

  2. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    Science.gov (United States)

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-01-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…

  3. Computer assisted design study of a low-cost pressure sensor

    NARCIS (Netherlands)

    Meuwissen, M.H.H.; Veninga, E.P.; Tijdink, M.W.W.J.; Meijerink, M.G.H.

    2005-01-01

    The application of numerical techniques for the design of a low cost pressure sensor is described. The numerical techniques assist in addressing issues related to the thermo-mechanical performance of the sensor. This comprises the selection of the materials and dimensions used for the sensor itself

  5. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    Science.gov (United States)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost – even lower than assuming that the non-amputee’s ankle torques are cost-free.
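
    The weighted-sum formulation mentioned above (minimize a weighted sum of human metabolic cost and prosthesis cost, then sweep the weight to trace Pareto-optimal designs) can be sketched with toy cost functions. Everything below, the single design variable, the cost curves, and the numbers, is hypothetical and stands in for the paper's musculoskeletal and actuator models.

```python
# Toy weighted-sum scan that traces a Pareto-style trade-off between a
# "human metabolic cost" and a "prosthesis cost", each a made-up function
# of one design variable (e.g., peak actuator torque).

def human_cost(torque):       # hypothetical: more assistance, less metabolic cost
    return 300.0 / (1.0 + torque) + 2.0 * torque**0.5

def prosthesis_cost(torque):  # hypothetical: bigger actuator, higher device cost
    return 0.8 * torque

def best_design(weight, torques):
    """Pick the torque minimizing weight*human + (1-weight)*device cost."""
    return min(torques, key=lambda q: weight * human_cost(q)
                                      + (1 - weight) * prosthesis_cost(q))

torques = [t / 10 for t in range(1, 501)]          # 0.1 .. 50 N*m (assumed range)
for w in (0.2, 0.5, 0.8):
    q = best_design(w, torques)
    print(f"w={w:.1f}: torque={q:5.1f}, human={human_cost(q):6.1f}, "
          f"device={prosthesis_cost(q):5.1f}")
```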

  6. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  7. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  8. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  9. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    A microcontroller (AT89C51)-based electronics package has been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform and multiplied to over 10 times the input frequency through a frequency multiplier circuit; the multiplication by a factor of ten is implemented with a phase-locked loop. An octal buffer stores the calculated frequency, which in turn is fed to the AT89C51 microcontroller, interfaced with a liquid crystal display that shows the frequency as well as the corresponding pressure in user-friendly units. The electronics is interfaced with a computer over RS232 for automatic data acquisition, computation, and storage, with the acquisition software written in Visual Basic 6.0, making it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. The hardware of the pressure measuring system, the associated electronics, the software, and the calibration are discussed in this paper.
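
    The final conversion from the measured (multiplied) frequency to a displayed pressure is, in essence, the evaluation of a calibration curve. The sketch below shows a generic polynomial calibration of that kind; the coefficients, reference frequency, and scaling are placeholders, not the Digiquartz calibration used by the authors.

```python
# Generic sketch of converting a measured transducer frequency to pressure
# through a calibration polynomial. All constants are hypothetical.

CAL_COEFFS = (0.0, 41.3, 2.7e-4)      # placeholder: p = a0 + a1*x + a2*x^2
F_REF_HZ = 34_000.0                   # placeholder zero-pressure frequency

def frequency_to_pressure(freq_hz):
    """Map the measured output frequency (Hz) to pressure (MPa)."""
    x = freq_hz - F_REF_HZ            # frequency shift from the reference
    a0, a1, a2 = CAL_COEFFS
    return (a0 + a1 * x + a2 * x * x) * 1e-3   # scale to MPa (placeholder)

print(f"{frequency_to_pressure(36_500.0):.3f} MPa")
```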

  10. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  11. Compact, Low-Cost, Frequency-Locked Semiconductor Laser for Injection Seeding High Power Laser Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This NASA Small Business Innovative Research Phase II project will develop a compact, low-cost, wavelength locked seed laser for injection locking high powered...

  12. Low-Cost and High-Performance Propulsion for Small Satellite Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — While small satellites continue to show immense promise for high-capability and low-cost missions, they remain limited by post-deployment propulsion for a variety of...

  13. Very Low-Cost, Rugged, High-Vacuum System for Mass Spectrometers Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA, the DoD, DHS, and commercial industry have a pressing need for miniaturized, rugged, low-cost, high vacuum systems. Recent advances in sensor technology at...

  14. 78 FR 16808 - Connect America Fund; High-Cost Universal Service Support

    Science.gov (United States)

    2013-03-19

    ... establish any comparator groups.'' They argue that the benchmark ``formulas impose limitations on companies... modify the high cost loop support (HCLS) benchmarks. DATES: Effective March 19, 2013. FOR FURTHER... networks while requiring accountability from companies receiving support and ensuring fairness...

  15. Hummingbird - A Very Low Cost, High Delta V Spacecraft for Solar System Exploration Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Based on Microcosm's development of a high delta-V small Earth observation spacecraft called NanoEye, with a planned recurring cost of $2 million, Microcosm will...

  16. Very Low-Cost, Rugged, High-Vacuum System for Mass Spectrometers Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA, DoD, DHS, and commercial industry have a pressing need for miniaturized, rugged, low-cost high-vacuum systems. Recent advances in sensor technology at NASA and...

  17. Computer analysis of effects of altering jet fuel properties on refinery costs and yields

    Science.gov (United States)

    Breton, T.; Dunbar, D.

    1984-01-01

    This study was undertaken to evaluate the adequacy of future U.S. jet fuel supplies, the potential for large increases in the cost of jet fuel, and to what extent a relaxation in jet fuel properties would remedy these potential problems. The results of the study indicate that refiners should be able to meet jet fuel output requirements in all regions of the country within the current Jet A specifications during the 1990-2010 period. The results also indicate that it will be more difficult to meet Jet A specifications on the West Coast, because the feedstock quality is worse and the required jet fuel yield (jet fuel/crude refined) is higher than in the East. The results show that jet fuel production costs could be reduced by relaxing fuel properties. Potential cost savings in the East (PADDs I-IV) through property relaxation were found to be about 1.3 cents/liter (5 cents/gallon) in January 1, 1981 dollars between 1990 and 2010. However, the savings from property relaxation were all obtained within the range of current Jet A specifications, so there is no financial incentive to relax Jet A fuel specifications in the East. In the West (PADD V) the potential cost savings from lowering fuel quality were considerably greater than in the East. Cost savings from 2.7 to 3.7 cents/liter (10-14 cents/gallon) were found. In contrast to the East, on the West Coast a significant part of the savings was obtained through relaxation of the current Jet A fuel specifications.

  18. Enhancing simulation of efficiency with analytical tools. [combining computer simulation and analytical techniques for cost reduction]

    Science.gov (United States)

    Seltzer, S. M.

    1974-01-01

    Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.

  19. [Evolution of reimbursement of high-cost anticancer drugs: Financial impact within a university hospital].

    Science.gov (United States)

    Baudouin, Amandine; Fargier, Emilie; Cerruti, Ariane; Dubromel, Amélie; Vantard, Nicolas; Ranchon, Florence; Schwiertz, Vérane; Salles, Gilles; Souquet, Pierre-Jean; Thomas, Luc; Bérard, Frédéric; Nancey, Stéphane; Freyer, Gilles; Trillet-Lenoir, Véronique; Rioufol, Catherine

    2017-06-01

    In the context of controlling health expenditure, the reimbursement of high-cost medicines with a 'minor' or 'nonexistent' improvement in actual health benefit, as evaluated by the Haute Autorité de santé, is revised by the decree of March 24, 2016 on the procedure and terms of registration of high-cost pharmaceutical drugs. This study aims to assess the economic impact of this measure. A six-month retrospective study was conducted within a French university hospital from July 1, 2015 to December 31, 2015. For each injectable high-cost anticancer drug prescribed to a patient with cancer, the therapeutic indication, its status in relation to the marketing authorization, and the associated improvement in actual health benefit were examined. The total costs of these treatments, the cost per type of indication and, in the case of marketing authorization indications, the cost per level of improvement in actual health benefit were evaluated, assuming that all drugs affected by the decree would be struck off the list. Over six months, 4416 high-cost injectable anticancer drugs were prescribed for a total cost of 4.2 million euros. The cost of drugs with a minor or nonexistent improvement in actual benefit and whose comparator is not a high-cost drug amounts to 557,564 euros. The reform of the terms of inscription on the list of high-cost drugs represents a significant additional cost for health institutions (1.1 million euros for our hospital) and raises the question of the accessibility of these treatments for cancer patients. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  20. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  1. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  2. Disruptive Models in Primary Care: Caring for High-Needs, High-Cost Populations.

    Science.gov (United States)

    Hochman, Michael; Asch, Steven M

    2017-04-01

    Starfield and colleagues have suggested four overarching attributes of good primary care: "first-contact access for each need; long-term person- (not disease) focused care; comprehensive care for most health needs; and coordinated care when it must be sought elsewhere." As this series on reinventing primary care highlights, there is a compelling need for new care delivery models that would advance these objectives. This need is particularly urgent for high-needs, high-cost (HNHC) populations. By definition, HNHC patients require extensive attention and consume a disproportionate share of resources, and as a result they strain traditional office-based primary care practices. In this essay, we offer a clinical vignette highlighting the challenges of caring for HNHC populations. We then describe two categories of primary care-based approaches for managing HNHC populations: complex case management, and specialized clinics focused on HNHC patients. Although complex case management programs can be incorporated into or superimposed on the traditional primary care system, such efforts often fail to engage primary care clinicians and HNHC patients, and proven benefits have been modest to date. In contrast, specialized clinics for HNHC populations are more disruptive, as care for HNHC patients must be transferred to a multidisciplinary team that can offer enhanced care coordination and other support. Such specialized clinics may produce more substantial benefits, though rigorous evaluation of these programs is needed. We conclude by suggesting policy reforms to improve care for HNHC populations.

  3. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  4. Iterative coupling reservoir simulation on high performance computers

    Institute of Scientific and Technical Information of China (English)

    Lu Bo; Wheeler Mary F

    2009-01-01

    In this paper, the iterative coupling approach is proposed for solving multiphase flow equation systems in reservoir simulation, as it provides a more flexible time-stepping strategy than existing approaches. The iterative method decouples the whole equation system into pressure and saturation/concentration equations, and then solves them in sequence, implicitly and semi-implicitly. At each time step, a series of iterations are computed, which involve solving linearized equations using specific tolerances that are iteration dependent. Following convergence of the subproblems, material balance is checked, and convergence of time steps is based on material balance errors. Key components of the iterative method include phase scaling for deriving a pressure equation and the use of several advanced numerical techniques. The iterative model is implemented for parallel computing platforms and shows high parallel efficiency and scalability.
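
    The structure of one iteratively coupled time step (implicit pressure solve, then semi-implicit saturation solve, repeated until a material-balance-style residual converges) can be sketched as follows. The two update functions are zero-dimensional toys standing in for the paper's discretized reservoir equations, so only the control flow is meaningful here.

```python
# Structural sketch of one iteratively coupled (sequential-implicit) time step.
# The "physics" is a zero-dimensional toy; only the iteration pattern matters.

def pressure_update(p, s, dt):
    """Toy implicit pressure solve given the latest saturation."""
    return (p + dt * (1.0 - s)) / (1.0 + dt)

def saturation_update(p, s_old, dt):
    """Toy semi-implicit saturation solve given the updated pressure."""
    return (s_old + dt * 0.3 * p) / (1.0 + dt * 0.3)

def coupled_step(p, s, dt, tol=1e-10, max_iters=50):
    """Iterate pressure and saturation solves until the residual converges."""
    s_old = s
    for it in range(max_iters):
        p_new = pressure_update(p, s, dt)
        s_new = saturation_update(p_new, s_old, dt)
        residual = abs(p_new - p) + abs(s_new - s)  # stand-in for material balance
        p, s = p_new, s_new
        if residual < tol:
            break
    return p, s, it + 1

print(coupled_step(p=1.0, s=0.2, dt=0.1))
```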

  5. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  6. Re-Engineering a High Performance Electrical Series Elastic Actuator for Low-Cost Industrial Applications

    Directory of Open Access Journals (Sweden)

    Kenan Isik

    2017-01-01

    Full Text Available Cost is an important consideration when transferring a technology from research to industrial and educational use. In this paper, we introduce the design of an industrial grade series elastic actuator (SEA), performed by re-engineering a research grade version of it. Cost-constrained design requires careful consideration of the key performance parameters for an optimal performance-to-cost component selection. To optimize the performance of the new design, we started by matching the capabilities of a high-performance SEA while cutting down its production cost significantly. Our posit was that performing a re-engineering design process on an existing high-end device will significantly reduce the cost without compromising the performance drastically. As a case study of design for manufacturability, we selected the University of Texas Series Elastic Actuator (UT-SEA), a high-performance SEA, for its high power density, compact design, high efficiency and high speed properties. We partnered with an industrial corporation in China to research the best pricing options and to exploit the retail and production facilities provided by the Shenzhen region. We succeeded in producing a low-cost industrial grade actuator at one-third of the cost of the original device by re-engineering the UT-SEA with commercial off-the-shelf components and reducing the number of custom-made parts. Subsequently, we conducted performance tests to demonstrate that the re-engineered product achieves the same high-performance specifications found in the original device. With this paper, we aim to raise awareness in the robotics community on the possibility of low-cost realization of low-volume, high performance, industrial grade research and education hardware.

  7. Using the Black Scholes method for estimating high cost illness insurance premiums in Colombia

    Directory of Open Access Journals (Sweden)

    Liliana Chicaíza

    2009-04-01

    Full Text Available This article applied the Black-Scholes option valuation formula to the calculation of high-cost illness reinsurance premiums in the Colombian health system. The coverage pattern used in reinsuring high-cost illnesses was replicated by means of a European call option contract. The option's relevant variables and parameters were adapted to an insurance market context. The premium estimated by the Black-Scholes method fell within the range of premiums estimated by the actuarial method.
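
    The European call value that such a premium calculation relies on has a standard closed form, C = S0 N(d1) - K e^(-rT) N(d2). The sketch below implements that textbook Black-Scholes formula; the interpretation of the inputs as expected claim cost, deductible, and claim-cost volatility, and the numbers themselves, are placeholder assumptions rather than the paper's Colombian data.

```python
# Textbook Black-Scholes price of a European call, read here as a high-cost
# illness cover that pays claims above a deductible. Inputs are placeholders.

from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s0, k, r, sigma, t):
    """s0: expected claim cost today, k: deductible ('strike'),
    r: risk-free rate, sigma: claim-cost volatility, t: horizon in years."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s0 * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# Hypothetical premium per insured for cover above a deductible:
print(f"{black_scholes_call(s0=1_200.0, k=1_500.0, r=0.05, sigma=0.45, t=1.0):.2f}")
```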

  8. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  9. Fundamental understanding and development of low-cost, high-efficiency silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    ROHATGI,A.; NARASIMHA,S.; MOSCHER,J.; EBONG,A.; KAMRA,S.; KRYGOWSKI,T.; DOSHI,P.; RISTOW,A.; YELUNDUR,V.; RUBY,DOUGLAS S.

    2000-05-01

    The overall objectives of this program are (1) to develop rapid and low-cost processes for manufacturing that can improve yield, throughput, and performance of silicon photovoltaic devices, (2) to design and fabricate high-efficiency solar cells on promising low-cost materials, and (3) to improve the fundamental understanding of advanced photovoltaic devices. Several rapid and potentially low-cost technologies are described in this report that were developed and applied toward the fabrication of high-efficiency silicon solar cells.

  10. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory]; Graham, Paul [Los Alamos National Laboratory]; Manuzzato, Andrea [UNIV OF PADOVA]; Dehon, Andre [UNIV OF PENN]; Carter, Nicholas [INTEL CORPORATION]

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  11. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  12. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab]; Garzoglio, G. [Fermilab]; Holzman, B. [Fermilab]; Kennedy, R. [Fermilab]; Norman, A. [Fermilab]; Timm, S. [Fermilab]; Tiradani, A. [Fermilab]

    2017-03-15

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 25 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  13. Considerations on a Cost Model for High-Field Dipole Arc Magnets for FCC

    CERN Document Server

    AUTHOR|(CDS)2078700; Durante, Maria; Lorin, Clement; Martinez, Teresa; Ruuskanen, Janne; Salmi, Tiina; Sorbi, Massimo; Tommasini, Davide; Toral, Fernando

    2017-01-01

    In the framework of the European Circular Collider (EuroCirCol), a conceptual design study for a post-Large Hadron Collider (LHC) research infrastructure based on an energy-frontier 100 TeV circular hadron collider [1]–[3], a cost model for the high-field dipole arc magnets is being developed. The aim of the cost model in the initial design phase is to provide the basis for sound strategic decisions towards cost-effective designs, in particular: (A) the technological choice of superconducting material and its cost, (B) the target performance of Nb3Sn superconductor, (C) the choice of operating temperature, (D) the relevant design margins and their importance for cost, (E) the nature and extent of grading, and (F) the aperture’s influence on cost. Within the EuroCirCol study, three design options for the high-field dipole arc magnets are under study: cos-θ [4], block [5], and common-coil [6]. Here, in the advanced design phase, a cost model helps to (1) identify the cost drivers and feed back this informati...

  14. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  15. Computer simulation of energy use, greenhouse gas emissions, and costs for alternative methods of processing fluid milk.

    Science.gov (United States)

    Tomasula, P M; Datta, N; Yee, W C F; McAloon, A J; Nutter, D W; Sampedro, F; Bonnaillie, L M

    2014-07-01

    Computer simulation is a useful tool for benchmarking electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature, short time (HTST) pasteurization was extended to include models for processes for shelf-stable milk and extended shelf-life milk that may help prevent the loss or waste of milk that leads to increases in the greenhouse gas (GHG) emissions for fluid milk. The models were for UHT processing, crossflow microfiltration (MF) without HTST pasteurization, crossflow MF followed by HTST pasteurization (MF/HTST), crossflow MF/HTST with partial homogenization, and pulsed electric field (PEF) processing, and were incorporated into the existing model for the fluid milk process. Simulation trials were conducted assuming a production rate for the plants of 113.6 million liters of milk per year to produce only whole milk (3.25%) and 40% cream. Results showed that GHG emissions in the form of process-related CO₂ emissions, defined as CO₂ equivalents (e)/kg of raw milk processed (RMP), and specific energy consumptions (SEC) for electricity and natural gas use for the HTST process alone were 37.6 g of CO₂e/kg of RMP, 0.14 MJ/kg of RMP, and 0.13 MJ/kg of RMP, respectively. Emissions of CO₂ and SEC for electricity and natural gas use were highest for the PEF process, with values of 99.1 g of CO₂e/kg of RMP, 0.44 MJ/kg of RMP, and 0.10 MJ/kg of RMP, respectively, and lowest for the UHT process at 31.4 g of CO₂e/kg of RMP, 0.10 MJ/kg of RMP, and 0.17 MJ/kg of RMP. Estimated unit production costs associated with the various processes were lowest for the HTST process and MF/HTST with partial homogenization at $0.507/L and highest for the UHT process at $0.60/L. The increase in shelf life associated with the UHT and MF processes may eliminate some of the supply chain product and consumer losses and waste of milk and compensate for the small increases in GHG

  16. Robust Coding for Lossy Computing with Receiver-Side Observation Costs

    CERN Document Server

    Ahmadi, Behzad

    2011-01-01

    An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.

  17. Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest

    CERN Document Server

    Felkel, Petr; Bruckschwaiger, Mario; Wegenkittl, Rainer

    2001-01-01

    The watershed algorithm belongs to the classical algorithms of mathematical morphology. Lotufo et al. published a principle of watershed computation by means of an iterative forest transform (IFT), which computes a shortest path forest from given markers. The algorithm itself was described for the 2D case (image) without a detailed discussion of its computation and memory demands for real datasets. As the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to use this algorithm for 3D datasets, bearing in mind the minimization of the higher memory consumption of the 3D case without losing the low asymptotic time complexity of O(m+C) (and also the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a priority queue with buckets and careful tuning of this implementation to reach as low a memory consumption as possible. The paper presents five possible modifications and methods of implementation of...
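
    To make the priority-queue-with-buckets idea concrete, here is a minimal Python sketch of a watershed-from-markers transform on a 2D grid, assuming a max-arc path cost and 4-connectivity. It illustrates the general technique only; the paper's carefully tuned 3D implementation and its five memory-saving variants are not reproduced here, and all names are illustrative.

    from collections import deque

    def watershed_from_markers(image, markers):
        """Minimal watershed-from-markers via a shortest-path forest.

        image:   2D list of int gray values in 0..255
        markers: 2D list with 0 for unlabeled pixels, >0 for seed labels
        Path cost is the maximum gray value along the path; pixels are
        processed in non-decreasing cost order from 256 cost buckets.
        """
        h, w = len(image), len(image[0])
        INF = 256
        cost = [[INF] * w for _ in range(h)]
        label = [[0] * w for _ in range(h)]
        buckets = [deque() for _ in range(257)]   # one bucket per possible cost

        for y in range(h):
            for x in range(w):
                if markers[y][x]:
                    cost[y][x] = image[y][x]
                    label[y][x] = markers[y][x]
                    buckets[cost[y][x]].append((y, x))

        for c in range(257):                      # process buckets in cost order
            q = buckets[c]
            while q:
                y, x = q.popleft()
                if cost[y][x] < c:                # stale entry, already improved
                    continue
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        nc = max(c, image[ny][nx])
                        if nc < cost[ny][nx]:
                            cost[ny][nx] = nc
                            label[ny][nx] = label[y][x]
                            buckets[nc].append((ny, nx))
        return label

    Because path costs are bounded integers, the bucket queue gives the same ordering as a heap at O(1) per operation, which is the main reason the paper prefers it for large 3D volumes.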

  18. Low cost SCR lamp driver indicates contents of digital computer registers

    Science.gov (United States)

    Cliff, R. A.

    1967-01-01

    Silicon Controlled Rectifier /SCR/ lamp driver is adapted for use in integrated circuit digital computers where it indicates the contents of the various registers. The threshold voltage at which visual indication begins is very sharply defined and can be adjusted to suit particular system requirements.

  19. Low-cost digital image processing on a university mainframe computer. [considerations in selecting and/or designing instructional systems

    Science.gov (United States)

    Williams, T. H. L.

    1981-01-01

    The advantages and limitations of using university mainframe computers in digital image processing instruction are listed. Aspects to be considered when designing software for this purpose include not only the general audience, but also the capabilities of the system regarding the size of the image/subimage, preprocessing and enhancement functions, geometric correction and registration techniques, classification strategy, classification algorithm, multitemporal analysis, and ancillary data and geographic information systems. The user/software/hardware interaction as well as acquisition and operating costs must also be considered.

  20. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    Science.gov (United States)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970's, geographic data analysis subsequently moved from large computers to minicomputers and now to microcomputers with radical reduction in the costs associated with planning analyses. Programs designed to process LANDSAT data to be used as one element in a geographic data base were used once NIMGRID (new IMGRID), a raster oriented geographic information system, was implemented on the microcomputer. Programs for training field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color infrared format. The basic microcomputer hardware needed to perform NIMGRID and most LANDSAT analyses is listed as well as the software available for LANDSAT processing.

  1. Register-based indicators for potentially inappropriate medication in high-cost patients with excessive polypharmacy.

    Science.gov (United States)

    Saastamoinen, Leena K; Verho, Jouko

    2015-06-01

    Excessive polypharmacy is often associated with inappropriate drug use. Because drug expenditures are heavily skewed and a considerable share of patients in the top 5% of the cost distribution have excessive polypharmacy, the appropriateness of their drug use should be reviewed. The aim of this study was to review the quality of drug use in patients with extremely high costs and excessive polypharmacy and to compare them with all drug users. This is a nationwide register study. The subjects of this study were all drug users in Finland over 15 years of age, n = 3,303,813. The measures used were annual total costs, average costs, and number of patients. The background characteristics used included gender, age, morbidity, number of prescribers, active substances, and indicators of potentially inappropriate drug use, for example, Beers criteria. The patients with high costs and excessive polypharmacy accounted for 22% of the total pharmaceutical expenditures but only 3% of drug users. One-third of them were elderly, compared with 11.3% of all drug users, and they used more potentially inappropriate medication (28.0% vs 19.9%). Excessive polypharmacy combined with inappropriate medication use should be prevented using all available methods. The patients with excessive polypharmacy and high drug costs provide a most interesting group for containing pharmaceutical costs via medication reviews. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Towards robust dynamical decoupling and high fidelity adiabatic quantum computation

    Science.gov (United States)

    Quiroz, Gregory

    Quantum computation (QC) relies on the ability to implement high-fidelity quantum gate operations and successfully preserve quantum state coherence. One of the most challenging obstacles for reliable QC is overcoming the inevitable interaction between a quantum system and its environment. Unwanted interactions result in decoherence processes that cause quantum states to deviate from a desired evolution, consequently leading to computational errors and loss of coherence. Dynamical decoupling (DD) is one such method, which seeks to attenuate the effects of decoherence by applying strong and expeditious control pulses solely to the system. Provided the pulses are applied over a time duration sufficiently shorter than the correlation time associated with the environment dynamics, DD effectively averages out undesirable interactions and preserves quantum states with a low probability of error, or fidelity loss. In this study various aspects of this approach are studied from sequence construction to applications of DD to protecting QC. First, a comprehensive examination of the error suppression properties of a near-optimal DD approach is given to understand the relationship between error suppression capabilities and the number of required DD control pulses in the case of ideal, instantaneous pulses. While such considerations are instructive for examining DD efficiency, i.e., performance vs the number of control pulses, high-fidelity DD in realizable systems is difficult to achieve due to intrinsic pulse imperfections which further contribute to decoherence. As a second consideration, it is shown how one can overcome this hurdle and achieve robustness and recover high-fidelity DD in the presence of faulty control pulses using Genetic Algorithm optimization and sequence symmetrization. Thirdly, to illustrate the implementation of DD in conjunction with QC, the utilization of DD and quantum error correction codes (QECCs) as a protection method for adiabatic quantum

  3. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Supercomputing is reaching out to ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is the reduction of the power used for operating the system and the elimination of the excess heat it generates. Current thinking sees optical interconnects on most interconnect levels as a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option to allow scaling the manufacturing technology to large volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components at the module level, integrating optical chips, laser diodes or PIN diodes as components much like the well-known SMD components used in electronics. The paper outlines the main challenges and potential solutions, and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  4. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    Science.gov (United States)

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
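
    As an illustration of the pair-filtering idea (the standard MD optimization the paper maps onto FPGA filter pipelines, not the FPGA design itself), the following Python sketch builds cell lists and keeps only particle pairs within the short-range cutoff. It assumes a cubic periodic box; the function and variable names are illustrative.

    import numpy as np

    def neighbor_pairs(positions, box, cutoff):
        """Return the set of particle pairs (i, j), i < j, closer than the cutoff.

        positions : (N, 3) array of coordinates inside a cubic box of edge `box`
        cutoff    : short-range force cutoff; pairs beyond it contribute ~zero force
        Binning into cells of edge >= cutoff means only the 27 neighbouring cells
        of a particle's own cell need to be examined.
        """
        n_cells = max(1, int(box // cutoff))
        cell_size = box / n_cells
        cells = {}
        for i, p in enumerate(positions):
            key = tuple(np.floor(p / cell_size).astype(int) % n_cells)
            cells.setdefault(key, []).append(i)

        pairs = set()
        cutoff2 = cutoff * cutoff
        for (cx, cy, cz), members in cells.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nkey = ((cx + dx) % n_cells, (cy + dy) % n_cells, (cz + dz) % n_cells)
                        for i in members:
                            for j in cells.get(nkey, ()):
                                if j <= i:
                                    continue                   # count each pair once
                                d = positions[i] - positions[j]
                                d -= box * np.round(d / box)   # minimum-image convention
                                if float(np.dot(d, d)) < cutoff2:
                                    pairs.add((i, j))
        return pairs

    # Example: 200 random particles in a 10 x 10 x 10 box with a 2.5 cutoff.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 10.0, size=(200, 3))
    print(len(neighbor_pairs(pts, box=10.0, cutoff=2.5)), "pairs survive the filter")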

  5. Computational characterization of high temperature composites via METCAN

    Science.gov (United States)

    Brown, H. C.; Chamis, Christos C.

    1991-01-01

    The computer code 'METCAN' (METal matrix Composite ANalyzer) developed at NASA Lewis Research Center can be used to predict the high temperature behavior of metal matrix composites using the room temperature constituent properties. A reference manual that characterizes some common composites is being developed from METCAN generated data. Typical plots found in the manual are shown for graphite/copper. These include plots of stress-strain, elastic and shear moduli, Poisson's ratio, thermal expansion, and thermal conductivity. This manual can be used in the preliminary design of structures and as a guideline for the behavior of other composite systems.

  6. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and the uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.

  7. Computational Proteomics: High-throughput Analysis for Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  8. SCEC Earthquake System Science Using High Performance Computing

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  9. Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor

    Science.gov (United States)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

    In search of novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as those experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photo diodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. But InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. However, surface stabilization data indicate that InN could be grown at 900 K in high nitrogen pressures, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  10. Automated packaging platform for low-cost high-performance optical components manufacturing

    Science.gov (United States)

    Ku, Robert T.

    2004-05-01

    Delivering high performance integrated optical components at low cost is critical to the continuing recovery and growth of the optical communications industry. In today's market, network equipment vendors need to provide their customers with new solutions that reduce operating expenses and enable new revenue generating IP services. They must depend on the availability of highly integrated optical modules exhibiting high performance, small package size, low power consumption, and most importantly, low cost. The cost of typical optical system hardware is dominated by linecards that are in turn cost-dominated by transmitters and receivers or transceivers and transponders. Cost-effective packaging of optical components in these small size modules is becoming the biggest challenge to be addressed. For many traditional component suppliers in our industry, the combination of small size, high performance, and low cost appears to be in conflict and not feasible with conventional product design concepts and labor intensive manual assembly and test. With the advent of photonic integration, there are a variety of materials, optics, substrates, active/passive devices, and mechanical/RF piece parts to manage in manufacturing to achieve high performance at low cost. The use of automation has been demonstrated to surpass manual operation in cost (even with very low labor cost) as well as product uniformity and quality. In this paper, we will discuss the value of using an automated packaging platform for the assembly and test of high performance active components, such as 2.5Gb/s and 10 Gb/s sources and receivers. Low cost, high performance manufacturing can best be achieved by leveraging a flexible packaging platform to address a multitude of laser and detector devices, integrate electronics, and handle various package bodies and fiber configurations. This paper describes the operation and results of working robotic assemblers in the manufacture of a Laser Optical Subassembly

  11. Study on the fuel cycle cost of gas turbine high temperature reactor (GTHTR300). Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Takei, Masanobu; Katanishi, Shoji; Nakata, Tetsuo; Kunitomi, Kazuhiko [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment]; Oda, Takefumi; Izumiya, Toru [Nuclear Fuel Industries, Ltd., Tokyo (Japan)]

    2002-11-01

    In the basic design of the gas turbine high temperature reactor (GTHTR300), reduction of the fuel cycle cost has a large benefit for improving overall plant economy. Therefore, the fuel cycle cost was evaluated for GTHTR300. First, since there was no actual experience with commercial-scale fuel fabrication for high-temperature gas-cooled reactors, a preliminary design was performed for a fuel fabrication plant with an annual throughput of 7.7 ton-U, sufficient for four GTHTR300 units, and the fuel fabrication cost was evaluated. Second, the fuel cycle cost was evaluated based on the equilibrium cycle of GTHTR300. The factors considered in this cost evaluation include uranium price, conversion, enrichment, fabrication, storage of spent fuel, reprocessing, and waste disposal. The fuel cycle cost of GTHTR300 was estimated at about 1.07 yen/kWh. If the back-end cost of reprocessing and waste disposal is included and assumed to be nearly equivalent to that of LWRs, the fuel cycle cost of GTHTR300 was estimated to be about 1.31 yen/kWh. Furthermore, the effects on fuel fabrication cost of such fuel specification parameters as enrichment, the number of fuel types, and the layer thickness were considered. Even if the enrichment varies from 10 to 20%, the number of fuel types changes from 1 to 4, the 1st layer thickness of the fuel changes by 30 μm, or the 2nd to 4th layer thickness of the fuel changes by 10 μm, the impact on fuel fabrication cost was evaluated to be negligible. (author)

  12. Low Cost Automated Manufacture of High Efficiency THINS ZTJ PV Blanket Technology (P-NASA12-007) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA needs lower cost solar arrays with high performance for a variety of missions. While high efficiency, space-qualified solar cells are in themselves costly, >...

  13. Computer Generated Imagery (CGI) Current Technology and Cost Measures Feasibility Study.

    Science.gov (United States)

    1980-09-26

    Commercial Airlines Organization (ICAO) publication. Academia offers another potential source of technology information, especially with respect to ... as Computer Graphics World and the International Commercial Airlines Organization (ICAO) publication. In addition, the proceedings of conferences ... and its viewing direction. Eyepoint: In a CIG ATD, the eyepoint is the simulated single-point location of the observer's eye relative to a monocular

  14. Scalable Light Module for Low-Cost, High-Efficiency Light-Emitting Diode Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Tarsa, Eric [Cree, Inc., Goleta, CA (United States)]

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  15. Improving the Precision and Speed of Euler Angles Computation from Low-Cost Rotation Sensor Data

    Directory of Open Access Journals (Sweden)

    Aleš Janota

    2015-03-01

    Full Text Available This article compares three different algorithms used to compute Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope)—the algorithms based on a rotational matrix, on transforming angular velocity to time derivatives of the Euler angles, and on a unit quaternion expressing rotation. The algorithms are compared by their computational efficiency and the accuracy of the Euler angle estimates. If the attitude of the object is computed only from data obtained by the gyroscope, the quaternion-based algorithm seems to be the most suitable (having similar accuracy as the matrix-based algorithm, but taking approx. 30% fewer clock cycles on the 8-bit microcomputer). Integration of the Euler angles’ time derivatives has a singularity, and therefore it is not accurate over the full range of the object’s attitude. Since the error in every real gyroscope system tends to increase with time due to its offset and thermal drift, we also propose some measures based on compensation by additional sensors (a magnetic compass and accelerometer). Vector data from the mentioned secondary sensors have to be transformed into the inertial frame of reference. While transformation of a vector by a matrix is slightly faster than doing the same by a quaternion, the compensated sensor system utilizing a matrix-based algorithm can be approximately 10% faster than the system utilizing quaternions (depending on implementation and hardware).

  16. Improving the precision and speed of Euler angles computation from low-cost rotation sensor data.

    Science.gov (United States)

    Janota, Aleš; Šimák, Vojtech; Nemec, Dušan; Hrbček, Jozef

    2015-03-23

    This article compares three different algorithms used to compute Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope)-the algorithms based on a rotational matrix, on transforming angular velocity to time derivatives of the Euler angles, and on a unit quaternion expressing rotation. The algorithms are compared by their computational efficiency and the accuracy of the Euler angle estimates. If the attitude of the object is computed only from data obtained by the gyroscope, the quaternion-based algorithm seems to be the most suitable (having similar accuracy as the matrix-based algorithm, but taking approx. 30% fewer clock cycles on the 8-bit microcomputer). Integration of the Euler angles' time derivatives has a singularity, and therefore it is not accurate over the full range of the object's attitude. Since the error in every real gyroscope system tends to increase with time due to its offset and thermal drift, we also propose some measures based on compensation by additional sensors (a magnetic compass and accelerometer). Vector data from the mentioned secondary sensors have to be transformed into the inertial frame of reference. While transformation of a vector by a matrix is slightly faster than doing the same by a quaternion, the compensated sensor system utilizing a matrix-based algorithm can be approximately 10% faster than the system utilizing quaternions (depending on implementation and hardware).
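
    A minimal sketch of the quaternion-based gyroscope integration that both records above compare against the matrix- and Euler-rate-based variants. It assumes body-frame angular rates in rad/s and a small fixed time step; the helper names and the first-order integration scheme are illustrative, not the authors' exact implementation.

    import math

    def quat_multiply(q, r):
        """Hamilton product of two quaternions given as (w, x, y, z)."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def integrate_gyro(q, omega, dt):
        """Advance attitude quaternion q by body angular rate omega over dt.

        Uses q_dot = 0.5 * q * (0, wx, wy, wz), first-order integration,
        then renormalizes to suppress numerical drift.
        """
        wx, wy, wz = omega
        dq = quat_multiply(q, (0.0, wx, wy, wz))
        q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
        norm = math.sqrt(sum(c * c for c in q))
        return tuple(c / norm for c in q)

    def quat_to_euler(q):
        """Convert a unit quaternion to roll, pitch, yaw (Z-Y-X convention), in radians."""
        w, x, y, z = q
        roll = math.atan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
        pitch = math.asin(max(-1.0, min(1.0, 2 * (w*y - z*x))))
        yaw = math.atan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))
        return roll, pitch, yaw

    # Example: integrate a constant 10 deg/s roll rate for one second at 100 Hz.
    q = (1.0, 0.0, 0.0, 0.0)
    for _ in range(100):
        q = integrate_gyro(q, (math.radians(10.0), 0.0, 0.0), dt=0.01)
    print([round(math.degrees(a), 2) for a in quat_to_euler(q)])   # roughly [10, 0, 0]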

  17. [VALIDATION OF A COMPUTER PROGRAM FOR DETECTION OF HOSPITAL MALNUTRITION AND ANALYSIS OF HOSPITAL COSTS].

    Science.gov (United States)

    Fernández Valdivia, Antonia; Rodríguez Rodríguez, José María; Valero Aguilera, Beatriz; Lobo Támer, Gabriela; Pérez de la Cruz, Antonio Jesús; García Larios, José Vicente

    2015-07-01

    Introduction: serum albumin is one of the methods for diagnosing malnutrition, owing to the simplicity of its determination and its low cost. Objectives: the main objective is to validate and implement a computer program, based on serum albumin determination, that allows early detection and treatment of malnourished patients or patients at risk of malnutrition; a secondary objective is the evaluation of costs by diagnosis-related groups (DRGs). Methods: the study design is a dynamic, prospective cohort including hospital discharges from November 2012 to March 2014. The study population comprised patients over 14 years of age admitted to the various departments of a medical-surgical hospital of the Complejo Hospitalario Universitario de Granada whose serum albumin values were below 3.5 g/dL, for a total of 307 patients. Results: of the 307 patients, 141 presented malnutrition (program sensitivity: 45.9%). 54.7% of the patients were men and 45.3% women. The mean age was 65.68 years. The median length of stay was 16 days. 13.4% of the patients died. The mean cost per DRG was €5,958.30, and this mean cost rose to €11,376.48 after malnutrition was detected. Conclusions: the algorithm implemented by the computer program identifies almost half of the malnourished hospitalized patients. Recording the diagnosis of malnutrition is essential.
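
    A minimal sketch of the screening rule the program implements (flagging admissions of patients over 14 years of age whose serum albumin falls below 3.5 g/dL for nutritional review). The record fields and function names are illustrative assumptions, not the hospital system's actual schema.

    ALBUMIN_THRESHOLD_G_DL = 3.5   # cutoff used by the program described in the article

    def flag_malnutrition_risk(patients):
        """Return patients whose serum albumin falls below the screening threshold.

        patients: iterable of dicts with illustrative keys 'id', 'age', 'albumin_g_dl'.
        Only patients over 14 years of age are screened, mirroring the study population.
        """
        return [p for p in patients
                if p["age"] > 14 and p["albumin_g_dl"] < ALBUMIN_THRESHOLD_G_DL]

    # Example with made-up records
    sample = [{"id": 1, "age": 70, "albumin_g_dl": 2.9},
              {"id": 2, "age": 45, "albumin_g_dl": 4.1}]
    print(flag_malnutrition_risk(sample))   # -> patient 1 flagged for nutritional review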

  18. ICAM (Integrated Computer-Aided Manufacturing) Manufacturing Cost/Design Guide. Volume 6. Project Summary.

    Science.gov (United States)

    1985-01-01

    List of figures (fragment, FTR450260000, 15 Jan 1985): Example of DICE Format for Sheet Metal Parts; Example of CED Format for Aluminum Skin; Example of CED Format for Aluminum Investment Castings; Example of CED Format for Material Cost of Aluminum Extrusions.

  19. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  20. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Shigeru; Yamasoba, Tatsuya (Kameda General Hospital, Chiba (Japan)); Iinuma, Toshitaka

    1992-05-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. The sizes of various locations in the middle ear cavity were measured and compared among pars flaccida type cholesteatoma, pars tensa type cholesteatoma, and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author).

  1. Analyzing high energy physics data using database computing: Preliminary report

    Science.gov (United States)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof of concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude over the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.

  2. Investigation of Vocational High-School Students' Computer Anxiety

    Science.gov (United States)

    Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan

    2013-01-01

    With the advent of computer technologies, we increasingly encounter these technologies in every field of life. The fact that computer technology is so interwoven with daily life makes it necessary to investigate the psychological attitudes towards computers of those who work with them. As this study is limited to…

  3. Performance and Cost-Effectiveness of Computed Tomography Lung Cancer Screening Scenarios in a Population-Based Setting: A Microsimulation Modeling Analysis in Ontario, Canada

    NARCIS (Netherlands)

    K. ten Haaf (Kevin); M.C. Tammemagi (Martin); Bondy, S.J. (Susan J.); C.M. van der Aalst (Carlijn); Gu, S. (Sumei); McGregor, S.E. (S. Elizabeth); Nicholas, G. (Garth); H.J. de Koning (Harry); L.F. Paszat (Lawrence F.)

    2017-01-01

    textabstractBackground: The National Lung Screening Trial (NLST) results indicate that computed tomography (CT) lung cancer screening for current and former smokers with three annual screens can be cost-effective in a trial setting. However, the cost-effectiveness in a population-based setting with

  4. High Cost/High Risk Components to Chalcogenide Molded Lens Model: Molding Preforms and Mold Technology

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.

    2012-10-05

    This brief report contains a critique of two key components of FiveFocal's cost model for glass compression molding of chalcogenide lenses for infrared applications. Molding preforms and mold technology have the greatest influence on the ultimate cost of the product and help determine the volumes needed to select glass molding over conventional single-point diamond turning or grinding and polishing. This brief report highlights key areas of both technologies with recommendations for further study.

  5. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    Science.gov (United States)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) atmosphere-induced degradation, 2) optical-system-induced degradation, 3) degradation and re-sampling in the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose result image size is 1500 * 1462. A literature study found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF[1], uses a Client/Server (C/S) layer, and invokes the free CPU resources in the LAN. The server pushes the process 1) to 3) tasks to the free computing capacity. Ultimately we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide virtually unlimited computation capacity provided that the network and task management server are affordable, and it is a brand new HPC solution for TDI-CCD imaging simulation and similar applications.
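
    A minimal Python sketch of the two ideas described above: the strategy pattern (each degradation stage is an interchangeable callable) and farming independent work units out to spare compute capacity. Here multiprocessing stands in for the WCF-based client/server layer, and the function names and toy degradation factors are illustrative only.

    from multiprocessing import Pool

    # Each "strategy" is just a callable that degrades an image tile; a real
    # implementation would apply FFT-based PSF convolution, resampling, etc.
    def atmosphere_degradation(tile):
        return [[v * 0.95 for v in row] for row in tile]

    def optics_degradation(tile):
        return [[v * 0.90 for v in row] for row in tile]

    def tdi_ccd_degradation(tile):
        return [[v * 0.85 for v in row] for row in tile]

    PIPELINE = [atmosphere_degradation, optics_degradation, tdi_ccd_degradation]

    def simulate_tile(tile):
        """Run one tile through the configured degradation pipeline."""
        for stage in PIPELINE:
            tile = stage(tile)
        return tile

    def simulate_image(tiles, workers=4):
        """Distribute independent tiles across worker processes and collect results."""
        with Pool(processes=workers) as pool:
            return pool.map(simulate_tile, tiles)

    if __name__ == "__main__":
        tiles = [[[100.0] * 8 for _ in range(8)] for _ in range(16)]
        result = simulate_image(tiles)
        print(len(result), "tiles processed")

    Swapping an algorithm (for example, a different interpolation scheme) then only means replacing one callable in PIPELINE, which is the configurability the strategy pattern buys.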

  6. A low-cost hierarchical nanostructured beta-titanium alloy with high strength.

    Science.gov (United States)

    Devaraj, Arun; Joshi, Vineet V; Srivastava, Ankit; Manandhar, Sandeep; Moxson, Vladimir; Duz, Volodymyr A; Lavender, Curt

    2016-04-01

    Lightweighting of automobiles through the use of novel low-cost, high strength-to-weight ratio structural materials can reduce the consumption of fossil fuels and, in turn, CO2 emissions. Working towards this goal, we achieved high strength in a low-cost β-titanium alloy, Ti-1Al-8V-5Fe (Ti185), via a hierarchical nanostructure consisting of a homogeneous distribution of micron-scale and nanoscale α-phase precipitates within the β-phase matrix. The sequence of phase transformations leading to this hierarchical nanostructure is explored using electron microscopy and atom probe tomography. Our results suggest that the high number density of nanoscale α-phase precipitates in the β-phase matrix is due to ω-assisted nucleation of α, resulting in a tensile strength greater than that of any current commercial titanium alloy. Thus, hierarchically nanostructured Ti185 serves as an excellent candidate for replacing costlier titanium alloys and other structural alloys in cost-effective lightweighting applications.

  7. Capital cost: low- and high-sulfur coal plants, 800 MWe

    Energy Technology Data Exchange (ETDEWEB)

    1977-01-01

    This Commercial Electric Power Cost Study for 800-MWe (Nominal) high- and low-sulfur coal plants consists of three volumes. The low-sulfur coal plant is described in Volumes I and II, while Volume III describes the high-sulfur coal plant. The design basis and cost estimate for the 801-MWe low sulfur coal plant is presented in Volume I, and the drawings, equimpment list, and site description are contained in Volume II. The design basis, drawings, and summary cost estimate for a 794-MWe high-sulfur coal plant are presented in Volume III. This information was developed by redesigning the low-sulfur sub-bituminous coal plant for burning high-sulfur bituminous coal. The reference design includes a lime flue-gas desulfurization system. These coal plants utilize a mechanical draft (wet) cooling tower system for condenser heat removal.

  8. Low-cost high-quality crystalline germanium based flexible devices

    KAUST Repository

    Nassar, Joanna M.

    2014-06-16

    High performance flexible electronics promise innovative future technology for various interactive applications in the pursuit of low-cost, light-weight, and multi-functional devices. Thus, here we show a complementary metal oxide semiconductor (CMOS) compatible fabrication of flexible metal-oxide-semiconductor capacitors (MOSCAPs) with a high-κ/metal gate stack, using a cost-effective physical vapor deposition (PVD) technique to obtain a high-quality Ge channel. We report an outstanding bending radius of ~1.25 mm and semi-transparency of 30%.

  9. The high intensity solar cell: Key to low cost photovoltaic power

    Science.gov (United States)

    Sater, B. L.; Goradia, C.

    1975-01-01

    The design considerations and performance characteristics of the 'high intensity' (HI) solar cell are presented. A high intensity solar system was analyzed to determine its cost effectiveness and to assess the benefits of further improving HI cell efficiency. It is shown that residential sized systems can be produced at less than $1000/kW peak electric power. Due to their superior high intensity performance characteristics compared to the conventional and VMJ cells, HI cells and light concentrators may be the key to low cost photovoltaic power.

  10. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  11. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade, 2010: High performance computers: Post-shipment verification reporting ... ADMINISTRATION REGULATIONS, SPECIAL REPORTING, § 743.2 High performance computers: Post shipment verification ... certain computers to destinations in Computer Tier 3; see § 740.7(d) for a list of these destinations ...

  12. Low–Cost Bio-Based Carbon Fiber for High-Temperature Processing

    Energy Technology Data Exchange (ETDEWEB)

    Naskar, Amit K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Akato, Kokouvi M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Tran, Chau D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Paul, Ryan M. [GrafTech International Holdings, Inc., Brooklyn Heights, OH (United States)]; Dai, Xuliang [GrafTech International Holdings, Inc., Brooklyn Heights, OH (United States)]

    2017-02-01

    GrafTech International Holdings Inc. (GTI) worked with Oak Ridge National Laboratory (ORNL) under CRADA No. NFE-15-05807 to develop lignin-based carbon fiber (LBCF) technology and to demonstrate LBCF performance in high-temperature products and applications. This work was unique and different from other reported LBCF work in that this study was application-focused and scalability-focused. Accordingly, the executed work was based on meeting criteria for technology development, cost, and application suitability. The focus of this work was to demonstrate lab-scale LBCF from at least 4 different precursor feedstock sources that could meet the estimated production cost of $5.00/pound and have an ash level of less than 500 ppm in the carbonized insulation-grade fiber. Accordingly, a preliminary cost model was developed based on publicly available information. The team demonstrated that 4 lignin samples met the cost criteria, as highlighted in Table 1. In addition, the ash level for the 4 carbonized lignin samples was below 500 ppm. Processing as-received lignin to produce a high purity lignin fiber was a significant accomplishment in that most industrial lignin, prior to purification, had greater than 4X the ash level needed for this project, and prior to this work there was not a clear path to achieving the purity target. The lab-scale development of LBCF was performed with a specific functional application in mind: high-temperature rigid insulation. GTI is currently a consumer of foreign-sourced pitch and rayon based carbon fibers for use in its high temperature insulation products, and the motivation was that LBCF had potential to decrease costs and increase product competitiveness in the marketplace through lowered raw material costs, lowered energy costs, and decreased environmental footprint. At the end of this project, the Technology Readiness Level (TRL) remained at 5 for LBCF in high temperature insulation.

  13. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R&D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spills and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
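
    To illustrate how a segmented CMT volume can feed a transport model, the hedged sketch below computes porosity and a crude z-direction pore-connectivity check on a binary voxel array; the random placeholder geometry and the 26-connectivity choice are assumptions for illustration, not taken from any workshop paper.

```python
import numpy as np
from scipy import ndimage

# Hypothetical segmented CMT volume: True = pore voxel, False = solid.
rng = np.random.default_rng(0)
pores = rng.random((64, 64, 64)) < 0.35   # placeholder geometry, not real data

porosity = pores.mean()

# Label connected pore clusters (26-connectivity) and check whether any cluster
# touches both the top and bottom z-slices -- a crude percolation test that a
# transport model would refine with an actual flow or conductivity simulation.
labels, _ = ndimage.label(pores, structure=np.ones((3, 3, 3)))
top = set(np.unique(labels[0])) - {0}
bottom = set(np.unique(labels[-1])) - {0}
percolates = bool(top & bottom)

print(f"porosity = {porosity:.3f}, percolating pore network in z: {percolates}")
```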

  14. Low-Cost Bio-Based Carbon Fibers for High Temperature Processing

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Ryan Michael [GrafTech International, Brooklyn Heights, OH (United States); Naskar, Amit [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-03

    GrafTech International Holdings Inc. (GTI), under Award No. DE-EE0005779, worked with Oak Ridge National Laboratory (ORNL) under CRADA No. NFE-15-05807 to develop lignin-based carbon fiber (LBCF) technology and to demonstrate LBCF performance in high-temperature products and applications. This work differed from other reported LBCF work in that the study was application- and scalability-focused. Accordingly, the executed work was driven by criteria covering technology development, cost, and application suitability. High-temperature carbon-fiber-based insulation is used in energy-intensive industries, such as metal heat treating and ceramic and semiconductor material production. Insulation plays a critical role in achieving high thermal and process efficiency, which is directly related to energy usage, cost, and product competitiveness. Current high-temperature insulation is made with petroleum-based carbon fibers, and one goal of this project was to develop and demonstrate an alternative lignin (biomass) based carbon fiber that would achieve lower cost, lower CO2 emissions, and lower energy consumption and result in insulation that met or exceeded the thermal efficiency of current commercial insulation. In addition, other products were targeted for evaluation with LBCF. As the project was designed to proceed in stages, the initial focus of this work was to demonstrate lab-scale LBCF from at least 4 different lignin precursor feedstock sources that could meet the estimated production cost of $5.00/pound and an ash level of less than 500 ppm in the carbonized insulation-grade fiber. Accordingly, a preliminary cost model was developed based on publicly available information. The team demonstrated that 4 lignin samples met the cost criteria. In addition, the ash level for the 4 carbonized lignin samples was below 500 ppm. Processing as-received lignin to produce a high-purity lignin fiber was a significant accomplishment in that most industrial

  15. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second-order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI, independent of the wave frequency. In general, the high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and depend on frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |u^ε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where u^ε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
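
    To make the collocation idea concrete, the hedged sketch below estimates the mean of a toy local-average QoI over a single uniformly distributed random wave speed using Gauss-Legendre collocation and compares it with a crude Monte Carlo estimate; the QoI, distribution, and parameters are illustrative assumptions, not the paper's Gaussian-beam construction.

```python
import numpy as np

# Toy QoI: a windowed local average of wave intensity that depends smoothly on
# a random wave speed c ~ Uniform(0.8, 1.2). The functional form is purely
# illustrative and stands in for the paper's Gaussian-beam-based QoI.
def qoi(c, eps=1e-2):
    x = np.linspace(-0.5, 0.5, 2001)
    u = np.cos(x / (c * eps))            # highly oscillatory toy "solution"
    window = np.exp(-x**2 / 0.02)        # local averaging kernel
    return np.sum(window * u**2) / np.sum(window)

# Gauss-Legendre collocation in the single random variable c.
nodes, weights = np.polynomial.legendre.leggauss(8)
a, b = 0.8, 1.2
c_nodes = 0.5 * (b - a) * nodes + 0.5 * (a + b)        # map [-1, 1] -> [a, b]
# E[qoi] = (1/(b-a)) * integral = 0.5 * sum(w_i * qoi(c_i)) since sum(w_i) = 2
mean_collocation = 0.5 * np.sum(weights * np.array([qoi(c) for c in c_nodes]))

# Crude Monte Carlo reference for comparison.
samples = np.random.default_rng(1).uniform(a, b, 2000)
mean_mc = np.mean([qoi(c) for c in samples])

print(f"collocation mean ≈ {mean_collocation:.4f}, Monte Carlo mean ≈ {mean_mc:.4f}")
```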

  16. Computational modeling of high pressure combustion mechanism in scram accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.Y. [Pusan Nat. Univ. (Korea); Lee, B.J. [Pusan Nat. Univ. (Korea); Agency for Defense Development, Taejon (Korea); Jeung, I.S. [Pusan Nat. Univ. (Korea); Seoul National Univ. (Korea). Dept. of Aerospace Engineering

    2000-11-01

    A computational study was carried out to analyze high-pressure combustion in a scram accelerator. Fluid dynamic modeling was based on the RANS equations for reactive flows, which were solved in a fully coupled manner using a fully implicit, upwind TVD scheme. For accurate simulation of high-pressure combustion in the ram accelerator, a 9-species, 25-step fully detailed reaction mechanism was incorporated into the existing CFD code previously used for ram accelerator studies. The mechanism is based on GRI-Mech 2.11, which includes the pressure-dependent reaction rate formulations indispensable for the correct prediction of induction time in a high-pressure environment. A real-gas equation of state was also included to account for molecular interactions and real-gas effects of high-pressure gases. The present combustion modeling is compared with previous 8-step and 19-step mechanisms under an ideal-gas assumption. The results show that mixture ignition characteristics are very sensitive to the combustion mechanism, and that different mechanisms produce different reactive flow-field characteristics with significant relevance to the operation mode and performance of the scram accelerator. (orig.)
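
    The pressure-dependent rate formulations mentioned above can be illustrated with the standard Lindemann falloff form, in which the effective rate constant blends low- and high-pressure Arrhenius limits; the sketch below uses hypothetical parameters, not values taken from GRI-Mech 2.11.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate constant k = A * T^n * exp(-Ea / (R*T))."""
    return A * T**n * np.exp(-Ea / (R * T))

def lindemann_falloff(T, M, low, high):
    """Effective rate constant in the Lindemann falloff form:
    k = k_inf * Pr / (1 + Pr), with reduced pressure Pr = k0 * [M] / k_inf."""
    k0 = arrhenius(*low, T)
    k_inf = arrhenius(*high, T)
    Pr = k0 * M / k_inf
    return k_inf * Pr / (1.0 + Pr)

# Hypothetical low- and high-pressure Arrhenius parameters (A, n, Ea [J/mol]).
low_params = (2.5e6, 0.0, 5.0e4)
high_params = (5.0e9, 0.0, 8.0e4)

T = 1200.0  # K
for p_atm in (1.0, 10.0, 100.0):
    M = p_atm * 101325.0 / (R * T)   # total molar concentration [mol/m^3]
    k_eff = lindemann_falloff(T, M, low_params, high_params)
    print(f"p = {p_atm:6.1f} atm -> k_eff = {k_eff:.3e}")
```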

  17. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs.

    Directory of Open Access Journals (Sweden)

    Christian Jungreuthmayer

    Full Text Available Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information in the analysis of metabolic networks. Taking gene regulation into account dramatically reduces the solution space and allows the presented algorithm to continuously eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs such as runtime, memory usage, and disk space are greatly reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without loss of predictive quality. This makes unbiased, system-wide predictions in large-scale metabolic networks possible without resorting to any optimization principle.
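
    A hedged sketch of the pruning idea is shown below: candidate modes that violate a Boolean transcriptional rule are discarded before any further processing. The reactions, candidate modes, and rule are invented for illustration and do not reproduce the authors' algorithm or data.

```python
# Toy illustration of pruning candidate elementary flux modes (EFMs) with a
# Boolean transcriptional rule. Reactions, modes, and the rule are invented
# for illustration; this is not the published algorithm or its model.

# Candidate EFMs represented by their support, i.e. the set of active reactions.
candidate_modes = [
    {"R_glycolysis", "R_respiration"},
    {"R_glycolysis", "R_fermentation"},
    {"R_ppp", "R_respiration"},
    {"R_glycolysis", "R_fermentation", "R_respiration"},
]

def rule_ok(mode, oxygen_present=True):
    """Hypothetical regulatory rule: under aerobic conditions the fermentation
    genes are repressed, so any mode using R_fermentation is infeasible."""
    if oxygen_present and "R_fermentation" in mode:
        return False
    return True

# Applying the rule early shrinks the set of modes that must be enumerated further.
feasible = [m for m in candidate_modes if rule_ok(m)]
print(f"kept {len(feasible)} of {len(candidate_modes)} candidate modes")
```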

  18. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    Science.gov (United States)

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and presents performance studies of a silicon photonic chip-scale optical switch for scalable interconnect networks in high-performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength-routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device-integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy-efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
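
    The wavelength-routing property that enables contention resolution can be sketched with the commonly used cyclic AWGR idealization, in which input port i on wavelength channel w exits output port (i + w) mod N; this mapping is assumed here for illustration and is not quoted from the paper.

```python
# Idealized cyclic routing of an N x N AWGR: input i on wavelength channel w
# exits output (i + w) mod N, so a source reaches any destination by choosing w.
# The mapping is a common idealization, assumed here for illustration only.

N = 8  # matches the 8 x 8 prototype size mentioned in the abstract

def awgr_output(input_port: int, wavelength: int, n: int = N) -> int:
    return (input_port + wavelength) % n

def wavelength_for(input_port: int, output_port: int, n: int = N) -> int:
    """Wavelength channel a source must use to reach a given destination."""
    return (output_port - input_port) % n

# Two inputs targeting the same output end up on different wavelength channels,
# so both can be delivered simultaneously: contention is resolved in wavelength.
dst = 5
for src in (1, 3):
    w = wavelength_for(src, dst)
    assert awgr_output(src, w) == dst
    print(f"input {src} -> output {dst} on wavelength channel {w}")
```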

  19. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel, with the aim of achieving clean energy production and emission reduction. Among the methods that can control combustion properties, control of the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressures of 220 MPa, 250 MPa and 280 MPa were examined. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of the biodiesel blends, such as spray penetration length, spray angle and the formation of the fuel-air mixture. The results show that, as injection pressure increases, a wider spray angle is produced by both the biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and fuel spray characteristics; a longer spray penetration length promotes fuel-air mixing.
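
    As a qualitative illustration of why higher injection pressure lengthens spray penetration, the hedged sketch below evaluates the widely cited Hiroyasu-Arai empirical correlation at the three injection pressures studied; the correlation and the placeholder nozzle and fluid properties are not part of the CFD model used in the abstract.

```python
import numpy as np

# Hedged illustration: Hiroyasu-Arai empirical correlation for diesel spray tip
# penetration (SI units), used only to show qualitatively that a larger pressure
# drop gives longer penetration. The CFD study itself does not use this
# correlation; nozzle diameter and densities below are rough placeholders.
def spray_penetration(dp, t, d0=0.15e-3, rho_l=870.0, rho_a=25.0):
    """Spray tip penetration S(t) [m] for pressure drop dp [Pa] at time t [s]."""
    t_b = 28.65 * rho_l * d0 / np.sqrt(rho_a * dp)        # breakup time
    before = 0.39 * np.sqrt(2.0 * dp / rho_l) * t          # t < t_b
    after = 2.95 * (dp / rho_a) ** 0.25 * np.sqrt(d0 * t)  # t >= t_b
    return np.where(t < t_b, before, after)

t = 1.0e-3  # 1 ms after start of injection
for p_inj_mpa in (220.0, 250.0, 280.0):
    dp = p_inj_mpa * 1e6 - 8.0e6     # injection pressure minus ambient pressure
    s_mm = 1e3 * float(spray_penetration(dp, t))
    print(f"{p_inj_mpa:.0f} MPa -> S(1 ms) ≈ {s_mm:.1f} mm")
```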

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource-constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then, an average of 200 job slots has been in continuous use at Vanderbilt in 2012 for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure, where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...