WorldWideScience

Sample records for high computational cost

  1. Low cost highly available digital control computer

    International Nuclear Information System (INIS)

    Silvers, M.W.

    1986-01-01

    When designing digital controllers for critical plant control it is important to provide several features. Among these are reliability, availability, maintainability, environmental protection, and low cost. An examination of several applications has led to a design that can be produced for approximately $20,000 (1000 control points). This design is compatible with modern concepts in distributed and hierarchical control. The canonical controller element is a dual-redundant, self-checking computer that communicates with a cross-strapped, electrically isolated input/output system. The input/output subsystem comprises multiple intelligent input/output cards. These cards accept commands from the primary processor, which are validated, executed, and acknowledged. Each card may be hot-replaced to facilitate sparing. The implementation of the dual-redundant computer architecture is discussed. Called the FS-86, this computer can be used for a variety of applications. It has most recently found application in the upgrade of San Francisco's Bay Area Rapid Transit (BART) train control, currently in progress, and has been proposed for feedwater control in a boiling water reactor.

  2. The Optimal Pricing of Computer Software and Other Products with High Switching Costs

    OpenAIRE

    Pekka Ahtiala

    2004-01-01

    The paper studies the determinants of the optimum prices of computer programs and their upgrades. It is based on the notion that because of the human capital invested in the use of a computer program by its user, this product has high switching costs, and on the finding that pirates are responsible for generating over 80 per cent of new software sales. A model to maximize the present value of the program to the program house is constructed to determine the optimal prices of initial programs a...

  3. Low-cost, high-performance and efficiency computational photometer design

    Science.gov (United States)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance and efficient drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time correlate read-out, capture, and image process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic including volcanic plumes, ice formation, and Arctic marine life.

  4. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  5. Positron emission tomography/computed tomography surveillance in patients with Hodgkin lymphoma in first remission has a low positive predictive value and high costs.

    Science.gov (United States)

    El-Galaly, Tarec Christoffer; Mylam, Karen Juul; Brown, Peter; Specht, Lena; Christiansen, Ilse; Munksgaard, Lars; Johnsen, Hans Erik; Loft, Annika; Bukh, Anne; Iyer, Victor; Nielsen, Anne Lerberg; Hutchings, Martin

    2012-06-01

    The value of performing post-therapy routine surveillance imaging in patients with Hodgkin lymphoma is controversial. This study evaluates the utility of positron emission tomography/computed tomography using 2-[18F]fluoro-2-deoxyglucose for this purpose and in situations with suspected lymphoma relapse. We conducted a multicenter retrospective study. Patients with newly diagnosed Hodgkin lymphoma achieving at least a partial remission on first-line therapy were eligible if they received positron emission tomography/computed tomography surveillance during follow-up. Two types of imaging surveillance were analyzed: "routine" when patients showed no signs of relapse at referral to positron emission tomography/computed tomography, and "clinically indicated" when recurrence was suspected. A total of 211 routine and 88 clinically indicated positron emission tomography/computed tomography studies were performed in 161 patients. In ten of 22 patients with recurrence of Hodgkin lymphoma, routine imaging surveillance was the primary tool for the diagnosis of the relapse. Extranodal disease, interim positron emission tomography-positive lesions and positron emission tomography activity at response evaluation were all associated with a positron emission tomography/computed tomography-diagnosed preclinical relapse. The true positive rates of routine and clinically indicated imaging were 5% and 13%, respectively (P = 0.02). The overall positive predictive value and negative predictive value of positron emission tomography/computed tomography were 28% and 100%, respectively. The estimated cost per routine imaging diagnosed relapse was US$ 50,778. Negative positron emission tomography/computed tomography reliably rules out a relapse. The high false positive rate is, however, an important limitation and a confirmatory biopsy is mandatory for the diagnosis of a relapse. With no proven survival benefit for patients with a pre-clinically diagnosed relapse, the high costs and low
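
    The headline figures above follow from simple test-performance arithmetic; the sketch below reproduces that arithmetic with hypothetical scan counts and an assumed per-scan price (neither is taken from the study).

```python
# Illustrative arithmetic for surveillance-imaging yield and cost
# (the counts and per-scan price are hypothetical; the study reports
# only the derived rates).

def imaging_yield(true_pos, false_pos, true_neg, false_neg, cost_per_scan):
    n_scans = true_pos + false_pos + true_neg + false_neg
    ppv = true_pos / (true_pos + false_pos)        # positive predictive value
    npv = true_neg / (true_neg + false_neg)        # negative predictive value
    tp_rate = true_pos / n_scans                   # true positives per scan performed
    cost_per_detected_relapse = n_scans * cost_per_scan / true_pos
    return ppv, npv, tp_rate, cost_per_detected_relapse

# Example: 211 routine scans, 10 relapses detected by routine imaging,
# an assumed cost of ~US$2,400 per PET/CT scan (hypothetical figure).
ppv, npv, tp_rate, cost = imaging_yield(true_pos=10, false_pos=26,
                                        true_neg=175, false_neg=0,
                                        cost_per_scan=2400)
print(f"PPV={ppv:.0%}  NPV={npv:.0%}  TP rate={tp_rate:.0%}  "
      f"cost per detected relapse=US$ {cost:,.0f}")
```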

  6. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    Science.gov (United States)

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of the human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART that requires the use of reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
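
    The risk multipliers mentioned above belong to the revised NIOSH lifting equation. The sketch below computes the recommended weight limit (RWL) and lifting index (LI) from the commonly published metric form of that equation; the task geometry is invented, and the frequency and coupling multipliers (FM, CM) are assumed values rather than entries looked up from the NIOSH tables.

```python
# Revised NIOSH lifting equation (metric form), as commonly published:
# RWL = LC * HM * VM * DM * AM * FM * CM, with LC = 23 kg.
# H, V, D in cm; A in degrees; FM and CM would normally come from tables.

def rwl(H, V, D, A, FM=1.0, CM=1.0):
    LC = 23.0                       # load constant, kg
    HM = min(1.0, 25.0 / H)         # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75)  # vertical multiplier
    DM = 0.82 + 4.5 / D             # distance multiplier
    AM = 1.0 - 0.0032 * A           # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

def lifting_index(load_kg, **task):
    return load_kg / rwl(**task)

# Hypothetical task: object held 40 cm from the body, lifted from 30 cm
# to 100 cm, 30 degrees of trunk twist, table multipliers assumed 0.88/0.95.
li = lifting_index(12.0, H=40, V=30, D=70, A=30, FM=0.88, CM=0.95)
print(f"Lifting index = {li:.2f}  (values > 1 indicate elevated risk)")
```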

  7. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
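
    As a concrete illustration of what such an extrapolation looks like, the sketch below applies the widely used two-point inverse-cube formula for the correlation energy; this is the generic textbook scheme, not necessarily the specific protocol advocated in the review, and the energies are placeholders.

```python
# Generic two-point extrapolation of the correlation energy to the
# complete-basis-set (CBS) limit, assuming the common E_X = E_CBS + A/X**3
# form. Energies below are placeholders, not calculated values.

def cbs_two_point(E_X, E_Y, X, Y, power=3):
    """Extrapolate correlation energies obtained with cardinal numbers X < Y."""
    wX, wY = X ** power, Y ** power
    return (wY * E_Y - wX * E_X) / (wY - wX)

# Hypothetical CCSD(T) correlation energies (hartree) with cc-pVTZ (X=3)
# and cc-pVQZ (Y=4) basis sets:
E_corr_TZ, E_corr_QZ = -0.30512, -0.31478
print(f"Estimated CBS correlation energy: "
      f"{cbs_two_point(E_corr_TZ, E_corr_QZ, 3, 4):.5f} Eh")
```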

  8. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  9. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  10. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  11. Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries

    OpenAIRE

    Narendra Kumar; Shikha Jain

    2012-01-01

    The paper deals with how remotely managed computing and IT resources can be beneficial in developing countries such as India and other countries of the Asian subcontinent. It not only defines the architectures and functionalities of cloud computing but also highlights the current demand for cloud computing to provide organizational and personal IT support at very low cost and with a high degree of flexibility. The power of cloud can be used to reduce the cost of IT - r...

  12. Incremental ALARA cost/benefit computer analysis

    International Nuclear Information System (INIS)

    Hamby, P.

    1987-01-01

    Commonwealth Edison Company has developed and is testing an enhanced Fortran Computer Program to be used for cost/benefit analysis of Radiation Reduction Projects at its six nuclear power facilities and Corporate Technical Support Groups. This paper describes a Macro-Driven IBM Mainframe Program comprised of two different types of analyses - an Abbreviated Program with fixed costs and base values, and an extended Engineering Version for a detailed, more thorough and time-consuming approach. The extended Engineering Version breaks radiation exposure costs down into two components - Health-Related Costs and Replacement Labor Costs. According to user input, the program automatically adjusts these two cost components and applies the derivation to company economic analyses such as replacement power costs, carrying charges, debt interest, and capital investment cost. The results from one or more program runs using different parameters may be compared in order to determine the most appropriate ALARA dose reduction technique. Benefits of this particular cost/benefit analysis technique include flexibility to accommodate a wide range of user data and pre-job preparation, as well as the use of proven and standardized company economic equations.
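
    A minimal sketch of the kind of incremental comparison such a program performs is given below: averted dose is converted to a monetary benefit (health-related plus replacement-labor components) and weighed against project cost. The dollar values per person-rem and the candidate projects are invented, not the program's built-in base values.

```python
# Illustrative ALARA cost/benefit comparison (not the CECo program itself).
# The monetary value assigned to averted dose and the cost items below are
# assumptions for the sake of the example.

def alara_net_benefit(dose_averted_person_rem,
                      value_per_person_rem,       # health-related cost per rem
                      replacement_labor_per_rem,  # extra crew cost avoided per rem
                      project_cost):
    benefit = dose_averted_person_rem * (value_per_person_rem
                                         + replacement_labor_per_rem)
    return benefit - project_cost

options = {
    "temporary shielding": alara_net_benefit(12.0, 10_000, 2_500, 95_000),
    "remote tooling":      alara_net_benefit(20.0, 10_000, 2_500, 310_000),
}
for name, net in options.items():
    print(f"{name:20s} net benefit = ${net:>10,.0f}")
```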

  13. Computer Software for Life Cycle Cost.

    Science.gov (United States)

    1987-04-01

    Air Command and Staff College student report (only a fragment of the scanned abstract is legible): "...obsolete), physical life (utility before physically wearing out), or application life (utility in a given function)." (7:5) The costs are usually

  14. The Hidden Cost of Buying a Computer.

    Science.gov (United States)

    Johnson, Michael

    1983-01-01

    In order to process data in a computer, application software must be either developed or purchased. Costs for modifications of the software package and maintenance are often hidden. The decision to buy or develop software packages should be based upon factors of time and maintenance. (MLF)

  15. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    Science.gov (United States)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called `Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  16. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  17. Cost/Benefit Analysis of Leasing Versus Purchasing Computers

    National Research Council Canada - National Science Library

    Arceneaux, Alan

    1997-01-01

    .... In constructing this model, several factors were considered, including: The purchase cost of computer equipment, annual lease payments, depreciation costs, the opportunity cost of purchasing, tax revenue implications and various leasing terms...
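
    A minimal sketch of the underlying lease-versus-purchase comparison is given below: discount each alternative's cash flows and compare net present values. The discount rate, purchase price, salvage value and lease payments are placeholder assumptions, and tax and depreciation effects are omitted.

```python
# Minimal lease-versus-purchase NPV comparison. All figures (purchase price,
# lease payments, discount rate, salvage value) are placeholder assumptions;
# tax and depreciation effects are omitted for brevity.

def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.07                                          # assumed cost of capital
purchase = npv([-250_000, 0, 0, 0, 40_000], rate)    # buy now, salvage in year 4
lease    = npv([-70_000] * 4 + [0], rate)            # four annual lease payments

print(f"NPV of purchasing: ${purchase:,.0f}")
print(f"NPV of leasing:    ${lease:,.0f}")
print("Leasing is cheaper" if lease > purchase else "Purchasing is cheaper")
```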

  18. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
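
    The diminishing-returns argument can be illustrated with a small experiment, sketched below. A simple Gaussian-kernel regressor stands in for the PNN, synthetic data stand in for the roadless-area/population-density data, and a bandwidth grid search stands in for the nonlinear optimizer; the point is only that calibration time keeps growing with sample size while the fit improvement flattens.

```python
# Diminishing-returns experiment: calibration cost (time) versus benefit
# (goodness of fit) as the calibration sample grows. A Gaussian-kernel
# regressor stands in for the PNN and the data are synthetic; none of this
# reproduces the report's analysis.
import time
import numpy as np

rng = np.random.default_rng(0)

def kernel_predict(x_train, y_train, x_query, bandwidth):
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w @ y_train) / (w.sum(axis=1) + 1e-300)  # guard against underflow

def calibrate(x_train, y_train, x_val, y_val):
    """'Nonlinear optimizer' stand-in: grid search over the bandwidth."""
    best = None
    for bw in np.logspace(-2, 0, 30):
        sse = np.sum((kernel_predict(x_train, y_train, x_val, bw) - y_val) ** 2)
        if best is None or sse < best[1]:
            best = (bw, sse)
    return best

x_val = rng.uniform(0, 1, 500)
y_val = np.sin(6 * x_val) + 0.1 * rng.standard_normal(500)

for n in (50, 200, 800, 3200):
    x = rng.uniform(0, 1, n)
    y = np.sin(6 * x) + 0.1 * rng.standard_normal(n)
    t0 = time.perf_counter()
    bw, sse = calibrate(x, y, x_val, y_val)
    dt = time.perf_counter() - t0
    print(f"n={n:5d}  time={dt:6.3f}s  validation SSE={sse:8.3f}")
```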

  19. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  20. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  1. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  2. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  3. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    “A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.). Acronyms: HPC, High Performance Computing; IP/IPv4, Internet Protocol (version 4.0); IPMC, Internet Protocol Multicast; LAN, Local Area Network; MCMD, Dr. Multicast; MPI, ...

  4. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  5. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs ... in determining the relative value of cloud computing.

  6. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  7. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  8. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special purpose computers, which are often used for trigger applications are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  9. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost

  10. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
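
    A back-of-envelope version of this cost comparison is sketched below. The hourly instance price, job runtimes, hardware cost, power draw and electricity price are assumptions for illustration, not the figures measured in the study.

```python
# Back-of-envelope cost comparison between an on-demand cloud instance and
# an in-house workstation for a batch of single-point energy jobs.
# All prices, lifetimes and runtimes below are assumptions, not the
# figures measured in the study.

def cloud_cost(jobs, hours_per_job, price_per_hour):
    return jobs * hours_per_job * price_per_hour

def in_house_cost_per_year(hardware_cost, lifetime_years, power_kw,
                           electricity_per_kwh, admin_per_year):
    return (hardware_cost / lifetime_years
            + power_kw * 24 * 365 * electricity_per_kwh
            + admin_per_year)

jobs, hours_per_job = 400, 3.0
cloud = cloud_cost(jobs, hours_per_job, price_per_hour=0.50)
local = in_house_cost_per_year(hardware_cost=6000, lifetime_years=4,
                               power_kw=0.4, electricity_per_kwh=0.12,
                               admin_per_year=500)
print(f"Cloud cost for {jobs} jobs: ${cloud:,.0f}")
print(f"In-house cost per year:     ${local:,.0f}")
print("Cloud is cheaper for this workload" if cloud < local
      else "In-house is cheaper for this workload")
```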

  11. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  12. Computer tomography: a cost-saving examination?

    International Nuclear Information System (INIS)

    Barneveld Binkhuysen, F.H.; Puijlaert, C.B.A.J.

    1987-01-01

    The research concerns the influence of the body computer tomograph (BCT) on efficiency in radiology and in the hospital as a whole in The Netherlands. Hospitals with CT are compared with hospitals without CT. In radiology the substitution effect is investigated, using the number of radiological performances per clinical patient as a parameter. This parameter proves to decrease in hospitals with a CT, in contrast to hospitals without a CT. The often-expressed opinion that the CT should specifically perform complementary examinations appears incorrect. As to efficiency in the hospital, this is related to the average hospital in-patient stay. The average hospital in-patient stay proves to be shorter in hospitals with a CT than in those without a CT. The CT has turned out to be a very effective expedient which, however, is being used inefficiently in The Netherlands owing to limited installation. 17 refs.; 6 figs.; 5 tabs

  13. Personal computers in high energy physics

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1987-01-01

    The role of personal computers within HEP is expanding as their capabilities increase and their cost decreases. Already they offer greater flexibility than many low-cost graphics terminals for a comparable cost and in addition they can significantly increase the productivity of physicists and programmers. This talk will discuss existing uses for personal computers and explore possible future directions for their integration into the overall computing environment. (orig.)

  14. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B.; Georgiev, G.; Dimitrov, L. [and others]

    1996-12-31

    A multichannel computer controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage -100-3000 V, output current - 0-3 mA, maximum number of channels in one crate - 78. 3 refs.

  15. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
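
    A minimal dense-matrix sketch of the stochastic diagonal estimator described above is given below: Rademacher probing vectors turn the problem into a linear system with multiple right-hand sides. It uses a plain direct solve on a small invented test matrix rather than the paper's mixed-precision iterative refinement at scale.

```python
# Stochastic estimation of diag(A^{-1}) via Rademacher probing vectors:
# diag(A^{-1}) ~= sum_k (v_k .* A^{-1} v_k) / sum_k (v_k .* v_k).
# Small dense example with a plain direct solve; the paper replaces the
# factorization with mixed-precision iterative refinement at scale.
import numpy as np

rng = np.random.default_rng(1)

def estimate_inverse_diagonal(A, n_probes=200):
    n = A.shape[0]
    V = rng.choice([-1.0, 1.0], size=(n, n_probes))  # Rademacher probes
    X = np.linalg.solve(A, V)                        # multiple right-hand sides
    return np.sum(V * X, axis=1) / np.sum(V * V, axis=1)

# Symmetric positive definite test matrix (stand-in for a covariance matrix).
n = 300
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

est = estimate_inverse_diagonal(A)
exact = np.diag(np.linalg.inv(A))
rel_err = np.linalg.norm(est - exact) / np.linalg.norm(exact)
print(f"relative error of stochastic diagonal estimate: {rel_err:.2%}")
```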

  16. High cost for drilling ships

    International Nuclear Information System (INIS)

    Hooghiemstra, J.

    2007-01-01

    Prices for renting a drilling ship are very high: the daily rent is 1% of the price of building such a ship, and those prices have risen as well. Still, it is attractive for oil companies to rent a drilling ship.

  17. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Insitute of Technology

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. We also demonstrated this infrastructure through the results of security breakdowns for the e-commerce case. In this paper, we illustrate this infrastructure by an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
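
    A minimal sketch of the Mean Failure Cost computation, in the matrix form usually given in this line of work (stakes matrix, dependency matrix, impact matrix and threat vector chained by multiplication), is shown below. The matrices are tiny made-up examples, not the e-commerce case data.

```python
# Mean Failure Cost as a chain of linear maps, in the form usually given in
# this line of work: MFC = ST . DP . IM . PT. The matrices below are tiny
# made-up examples, not the e-commerce case study data.
import numpy as np

# ST[s, r]: cost ($/h) to stakeholder s if security requirement r fails.
ST = np.array([[900., 300.],
               [150., 600.],
               [ 40.,  80.]])
# DP[r, c]: probability requirement r fails given component c is compromised
# (last column = "no component compromised").
DP = np.array([[0.7, 0.2, 0.0],
               [0.1, 0.8, 0.0]])
# IM[c, t]: probability component c is compromised given threat t materializes
# (last row/column = "no component" / "no threat").
IM = np.array([[0.5, 0.1, 0.0],
               [0.2, 0.6, 0.0],
               [0.3, 0.3, 1.0]])
# PT[t]: probability that threat t materializes during unit operation time.
PT = np.array([0.02, 0.05, 0.93])

MFC = ST @ DP @ IM @ PT          # expected loss per stakeholder ($/h)
for s, cost in enumerate(MFC):
    print(f"stakeholder {s}: mean failure cost ~ ${cost:.2f}/h")
```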

  18. High-cost users of medical care

    OpenAIRE

    Garfinkel, Steven A.; Riley, Gerald F.; Iannacchione, Vincent G.

    1988-01-01

    Based on data from the National Medical Care Utilization and Expenditure Survey, the 10 percent of the noninstitutionalized U.S. population that incurred the highest medical care charges was responsible for 75 percent of all incurred charges. Health status was the strongest predictor of high-cost use, followed by economic factors. Persons 65 years of age or over incurred far higher costs than younger persons and had higher out-of-pocket costs, absolutely and as a percentage of income, althoug...

  19. Cost-effectiveness analysis of computer-based assessment

    Directory of Open Access Journals (Sweden)

    Pauline Loewenberger

    2003-12-01

    The need for more cost-effective and pedagogically acceptable combinations of teaching and learning methods to sustain increasing student numbers means that the use of innovative methods, using technology, is accelerating. There is an expectation that economies of scale might provide greater cost-effectiveness whilst also enhancing student learning. The difficulties and complexities of these expectations are considered in this paper, which explores the challenges faced by those wishing to evaluate the cost-effectiveness of computer-based assessment (CBA). The paper outlines the outcomes of a survey which attempted to gather information about the costs and benefits of CBA.

  20. The high cost of conflict.

    Science.gov (United States)

    Forté, P S

    1997-01-01

    Conflict is inevitable, especially in highly stressed environments. Clinical environments marked by nurse-physician conflict (and nurse withdrawal related to conflict avoidance) have been proven to be counterproductive to patients. Clinical environments with nurse-physician professional collegiality and respectful communication show decreased patient morbidity and mortality, thus enhancing outcomes. The growth of managed care, and the organizational turmoil associated with rapid change, makes it imperative to structure the health care environment so that conflict can be dealt with in a safe and healthy manner. Professional health care education programs and employers have a responsibility to provide interactive opportunities for multidisciplinary audiences through which conflict management skills can be learned and truly change the interpersonal environment. Professionals must be free to focus their energy on the needs of the patient, not on staff difficulties.

  1. Low-cost high purity production

    Science.gov (United States)

    Kapur, V. K.

    1978-01-01

    Economical process produces high-purity silicon crystals suitable for use in solar cells. The reaction is strongly exothermic and can be initiated at relatively low temperature, making it potentially suitable for development into a low-cost commercial process. Important advantages include its exothermic character and comparatively low process temperatures. These could lead to significant savings in equipment and energy costs.

  2. Development of computer program for estimating decommissioning cost - 59037

    International Nuclear Information System (INIS)

    Kim, Hak-Soo; Park, Jong-Kil

    2012-01-01

    The programs for estimating the decommissioning cost have been developed for many different purposes and applications. The estimation of decommissioning cost requires a large amount of data such as unit cost factors, plant area and its inventory, waste treatment, etc. These make it difficult to use manual calculation or typical spreadsheet software such as Microsoft Excel. The cost estimation for eventual decommissioning of nuclear power plants is a prerequisite for safe, timely and cost-effective decommissioning. To estimate the decommissioning cost more accurately and systematically, KHNP, Korea Hydro and Nuclear Power Co. Ltd, developed a decommissioning cost estimating computer program called 'DeCAT-Pro', which stands for Decommissioning Cost Assessment Tool - Professional (hereinafter called 'DeCAT'). This program allows users to easily assess the decommissioning cost with various decommissioning options. Also, this program provides detailed reporting of decommissioning funding requirements as well as detailed project schedules, cash-flow, staffing plans and levels, and waste volumes by waste classifications and types. KHNP is planning to implement functions for estimating the plant inventory using 3-D technology and for classifying the conditions of radwaste disposal and transportation automatically. (authors)

  3. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sø rbye, Sigrunn H.; Myrvoll-Nilsen, Eirik; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood

  4. Cost-effectiveness of PET and PET/Computed Tomography

    DEFF Research Database (Denmark)

    Gerke, Oke; Hermansson, Ronnie; Hess, Søren

    2015-01-01

    measure by means of incremental cost-effectiveness ratios when considering the replacement of the standard regimen by a new diagnostic procedure. This article discusses economic assessments of PET and PET/computed tomography reported until mid-July 2014. Forty-seven studies on cancer and noncancer...

  5. A survey of cost accounting in service-oriented computing

    NARCIS (Netherlands)

    de Medeiros, Robson W.A.; Rosa, Nelson S.; Campos, Glaucia M.M.; Ferreira Pires, Luis

    Nowadays, companies are increasingly offering their business services through computational services on the Internet in order to attract more customers and increase their revenues. However, these services have financial costs that need to be managed in order to maximize profit. Several models and

  6. Low cost spacecraft computers: Oxymoron or future trend?

    Science.gov (United States)

    Manning, Robert M.

    1993-01-01

    Over the last few decades, application of current terrestrial computer technology in embedded spacecraft control systems has been expensive and wrought with many technical challenges. These challenges have centered on overcoming the extreme environmental constraints (protons, neutrons, gamma radiation, cosmic rays, temperature, vibration, etc.) that often preclude direct use of commercial off-the-shelf computer technology. Reliability, fault tolerance and power have also greatly constrained the selection of spacecraft control system computers. More recently, new constraints are being felt, cost and mass in particular, that have again narrowed the degrees of freedom spacecraft designers once enjoyed. This paper discusses these challenges, how they were previously overcome, how future trends in commercial computer technology will simplify (or hinder) selection of computer technology for spacecraft control applications, and what spacecraft electronic system designers can do now to circumvent them.

  7. High speed computer assisted tomography

    International Nuclear Information System (INIS)

    Maydan, D.; Shepp, L.A.

    1980-01-01

    X-ray generation and detection apparatus for use in a computer assisted tomography system which permits relatively high speed scanning. A large x-ray tube having a circular anode (3) surrounds the patient area. A movable electron gun (8) orbits adjacent to the anode. The anode directs x-rays into the patient area, where they are delimited into a fan beam by a pair of collimating rings (21). After passing through the patient, the x-rays are detected by an array (22) of movable detectors. Detector subarrays (23) are synchronously movable out of the x-ray plane to permit the passage of the fan beam.

  8. Computer simulation at high pressure

    International Nuclear Information System (INIS)

    Alder, B.J.

    1977-11-01

    The use of either the Monte Carlo or molecular dynamics method to generate equations-of-state data for various materials at high pressure is discussed. Particular emphasis is given to phase diagrams, such as the generation of various types of critical lines for mixtures, melting, structural and electronic transitions in solids, two-phase ionic fluid systems of astrophysical interest, as well as a brief aside of possible eutectic behavior in the interior of the earth. Then the application of the molecular dynamics method to predict transport coefficients and the neutron scattering function is discussed with a view as to what special features high pressure brings out. Lastly, an analysis by these computational methods of the measured intensity and frequency spectrum of depolarized light and also of the deviation of the dielectric measurements from the constancy of the Clausius--Mosotti function is given that leads to predictions of how the electronic structure of an atom distorts with pressure

  9. Cost optimisation studies of high power accelerators

    Energy Technology Data Exchange (ETDEWEB)

    McAdams, R.; Nightingale, M.P.S.; Godden, D. [AEA Technology, Oxon (United Kingdom)] [and others

    1995-10-01

    Cost optimisation studies are carried out for an accelerator based neutron source consisting of a series of linear accelerators. The characteristics of the lowest cost design for a machine of given beam current and energy, such as power and length, are found to depend on the lifetime envisaged for it. For a fixed neutron yield it is preferable to have a low current, high energy machine. The benefits of superconducting technology are also investigated. A Separated Orbit Cyclotron (SOC) has the potential to reduce capital and operating costs and initial estimates for the transverse and longitudinal current limits of such machines are made.

  10. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  11. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of the Grid computing, the R and D status of the high energy physics grid computing technology, the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in Chinese high energy physics community is introduced at last. (authors)

  12. Cost/benefit of high technology in diagnostic radiology

    Energy Technology Data Exchange (ETDEWEB)

    Goethlin, J.H.

    1987-08-01

    High technology is frequently blamed as a main cause for the last decade's disproportionate rise in health expenditure. Total costs for all large diagnostic and therapeutic appliances are typically less than 1% of annual expenditure on health care. CT, DSA, MRI, interventional radiology, ESWL, US, mammography, computers in radiology and PACS may save 10-80% of total cost for diagnosis and treatment of disease. Expenditure on high technology is in general vastly overestimated. Because of its medical utility, a slower deployment cannot be desirable. (orig.)

  13. Cost/benefit of high technology in diagnostic radiology

    International Nuclear Information System (INIS)

    Goethlin, J.H.

    1987-01-01

    High technology is frequently blamed as a main cause for the last decade's disproportionate rise in health expenditure. Total costs for all large diagnostic and therapeutic appliances are typically less than 1% of annual expenditure on health care. CT, DSA, MRI, interventional radiology, ESWL, US, mammography, computers in radiology and PACS may save 10-80% of total cost for diagnosis and treatment of disease. Expenditure on high technology is in general vastly overestimated. Because of its medical utility, a slower deployment cannot be desirable. (orig.)

  14. User manual for PACTOLUS: a code for computing power costs

    International Nuclear Information System (INIS)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses including the return on investment over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results with the updated version of PACTOLUS. 11 figures, 2 tables
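
    The stated principle (present worth of revenues equals present worth of expenses) leads directly to a levelized unit cost, sketched below with placeholder cash flows; PACTOLUS's fuel-cycle, tax and financing detail is omitted.

```python
# Levelized busbar cost from the stated principle: choose the unit price so
# that the present worth of revenues equals the present worth of expenses.
# Placeholder cash flows; PACTOLUS's tax and financing detail is omitted.

def levelized_cost_mills_per_kwh(annual_costs, annual_kwh, discount_rate):
    """annual_costs in $, annual_kwh in kWh, years 1..N; returns mills/kWh."""
    pw = lambda x, t: x / (1 + discount_rate) ** t
    pw_costs = sum(pw(c, t) for t, c in enumerate(annual_costs, start=1))
    pw_energy = sum(pw(e, t) for t, e in enumerate(annual_kwh, start=1))
    return 1000.0 * pw_costs / pw_energy     # 1 mill = $0.001

# Hypothetical 30-year plant: capital charge + fuel + O&M each year,
# 6.5e9 kWh generated per year (illustrative numbers only).
years = 30
costs = [180e6 + 45e6 + 30e6] * years
energy = [6.5e9] * years
print(f"levelized busbar cost: "
      f"{levelized_cost_mills_per_kwh(costs, energy, 0.08):.1f} mills/kWh")
```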

  15. User manual for PACTOLUS: a code for computing power costs.

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses including the return on investment over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)

  16. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is $\mathcal{O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to $\mathcal{O}(n^{3})$. This paper presents an approximate fGn model of $\mathcal{O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
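
    The approximation idea can be illustrated with a short sketch: compare the exact fGn autocorrelation with that of a weighted sum of four independent AR(1) processes. The lag-one coefficients and weights below are made up for illustration; the paper fits them to match the fGn autocorrelation function.

```python
# Sketch of the idea: approximate the fGn autocorrelation with a weighted sum
# of independent AR(1) processes. Coefficients and weights are illustrative,
# not the fitted values from the paper.
import numpy as np

def fgn_acf(k, H):
    """Exact fGn autocorrelation at integer lags k, Hurst exponent H."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def ar1_mixture_acf(k, phis, weights):
    """Autocorrelation of a weighted sum of independent unit-variance AR(1)s."""
    w2 = np.asarray(weights) ** 2
    return np.array([(w2 * np.asarray(phis) ** lag).sum() / w2.sum() for lag in k])

H = 0.8
lags = np.arange(0, 21)
phis = [0.10, 0.55, 0.86, 0.985]      # illustrative, not fitted values
weights = [0.45, 0.35, 0.30, 0.25]

for lag, exact, approx in zip(lags, fgn_acf(lags, H),
                              ar1_mixture_acf(lags, phis, weights)):
    print(f"lag {lag:2d}:  fGn {exact:6.3f}   AR(1)-mixture {approx:6.3f}")
```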

  17. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has a strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESⅢ elastic cloud, are also described briefly. (authors)

  18. Client-server computer architecture saves costs and eliminates bottlenecks

    International Nuclear Information System (INIS)

    Darukhanavala, P.P.; Davidson, M.C.; Tyler, T.N.; Blaskovich, F.T.; Smith, C.

    1992-01-01

    This paper reports that workstation, client-server architecture saved costs and eliminated bottlenecks that BP Exploration (Alaska) Inc. experienced with mainframe computer systems. In 1991, BP embarked on an ambitious project to change technical computing for its Prudhoe Bay, Endicott, and Kuparuk operations on Alaska's North Slope. This project promised substantial rewards, but also involved considerable risk. The project plan called for reservoir simulations (which historically had run on a Cray Research Inc. X-MP supercomputer in the company's Houston data center) to be run on small computer workstations. Additionally, large Prudhoe Bay, Endicott, and Kuparuk production and reservoir engineering data bases and related applications also would be moved to workstations, replacing a Digital Equipment Corp. VAX cluster in Anchorage

  19. Addressing the computational cost of large EIT solutions

    International Nuclear Information System (INIS)

    Boyle, Alistair; Adler, Andy; Borsic, Andrea

    2012-01-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection. (paper)
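
    A small timing sketch in the spirit of this profiling is given below: a two-dimensional finite-difference Laplacian stands in for the EIT FEM system matrix, and SciPy's SuperLU interface stands in for the sparse solver (this is not EIDORS, NDRM or Meagre-Crowd). Factoring once and reusing the factors for many right-hand sides mirrors a forward solve over many stimulation patterns.

```python
# Timing sketch in the spirit of the profiling described above: a 2-D
# finite-difference Laplacian stands in for the EIT FEM system matrix and
# SciPy's SuperLU interface stands in for the sparse solver. Factor once,
# then reuse the factorization for many right-hand sides.
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d(m):
    """Sparse 5-point Laplacian on an m x m grid (m*m unknowns)."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

for m in (50, 100, 200):
    A = laplacian_2d(m)
    b = np.random.default_rng(0).standard_normal((A.shape[0], 16))  # 16 "stimulation patterns"
    t0 = time.perf_counter()
    lu = spla.splu(A)            # sparse LU factorization
    t_factor = time.perf_counter() - t0
    t0 = time.perf_counter()
    x = lu.solve(b)              # reuse factors for all right-hand sides
    t_solve = time.perf_counter() - t0
    print(f"{A.shape[0]:6d} unknowns: factor {t_factor:6.3f}s, "
          f"16 solves {t_solve:6.3f}s")
```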

  20. Addressing the computational cost of large EIT solutions.

    Science.gov (United States)

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.

  1. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software

    International Nuclear Information System (INIS)

    Bierschbach, M.C.; Mencinsky, G.J.

    1993-10-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the US Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology that will assist the NRC staff in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  2. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, HyperTransport links in next-generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  3. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  4. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  5. High accuracy ion optics computing

    International Nuclear Information System (INIS)

    Amos, R.J.; Evans, G.A.; Smith, R.

    1986-01-01

    Computer simulation of focused ion beams for surface analysis of materials by SIMS, or for microfabrication by ion beam lithography, plays an important role in the design of low energy ion beam transport and optical systems. Many computer packages currently available are limited in their applications, being inaccurate or inappropriate for a number of practical purposes. This work describes an efficient and accurate computer programme which has been developed and tested for use on medium-sized machines. The programme is written in Algol 68 and models the behaviour of a beam of charged particles through an electrostatic system. A variable grid finite difference method is used with a unique data structure to calculate the electric potential in an axially symmetric region, for arbitrarily shaped boundaries. Emphasis has been placed upon finding an economic method of solving the resulting set of sparse linear equations in the calculation of the electric field, and several such methods are described. Applications include individual ion lenses, extraction optics for ions in surface analytical instruments and the design of columns for ion beam lithography. Computational results have been compared with analytical calculations and with some data obtained from individual einzel lenses. (author)
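
    The abstract describes computing the electric potential in an axially symmetric region with a variable-grid finite difference method. As a much simpler illustration of the underlying idea, the sketch below relaxes Laplace's equation on a uniform grid with plain Jacobi iteration between two electrodes held at assumed voltages; it omits the axisymmetric 1/r term and the variable grid, so it is only a toy stand-in for the Algol 68 solver described, with all grid sizes and voltages chosen arbitrarily.

        # Toy stand-in for the potential solver: Jacobi relaxation of Laplace's
        # equation on a uniform 2D grid with two fixed-potential electrodes.
        # (The paper's solver uses a variable grid, the axisymmetric form, and a
        # sparse direct method; electrode voltages and grid sizes here are assumed.)
        import numpy as np

        def relax_potential(nr=60, nz=120, v_left=0.0, v_right=1000.0,
                            tol=1e-4, max_iter=20_000):
            v = np.zeros((nr, nz))
            v[:, 0] = v_left                      # electrode plane at z = 0
            v[:, -1] = v_right                    # e.g. an extraction electrode at z = L
            for _ in range(max_iter):
                v_new = v.copy()
                v_new[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                            + v[1:-1, :-2] + v[1:-1, 2:])
                v_new[:, 0], v_new[:, -1] = v_left, v_right
                if np.max(np.abs(v_new - v)) < tol:
                    break
                v = v_new
            return v_new

        potential = relax_potential()
        print("potential at mid-axis:", potential[0, potential.shape[1] // 2])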

  6. Low Cost, Low Power, High Sensitivity Magnetometer

    Science.gov (United States)

    2008-12-01

    ...which are used to measure the small magnetic signals from the brain. Other types of vector magnetometers are fluxgate, coil-based, and magnetoresistance... ...concentrator with the magnetometer currently used in Army multimodal sensor systems, the Brown fluxgate. One sees the MEMS fluxgate magnetometer is... (Edelstein, A.S.; Burnette, James E.; Fischer, Greg A.; Guedes, A.; et al., 2008)

  7. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto; Sevilla, Galo T.; Hussain, Muhammad Mustafa

    2013-01-01

    ...cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high-performance transistors are considered the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process...

  8. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  9. High Efficiency, Low Cost Scintillators for PET

    International Nuclear Information System (INIS)

    Kanai Shah

    2007-01-01

    Inorganic scintillation detectors coupled to PMTs are an important element of medical imaging applications such as positron emission tomography (PET). Performance as well as cost of these systems is limited by the properties of the scintillation detectors available at present. The Phase I project was aimed at demonstrating the feasibility of producing high performance scintillators using a low cost fabrication approach. Samples of these scintillators were produced and their performance was evaluated. Overall, the Phase I effort was very successful. The Phase II project will be aimed at advancing the new scintillation technology for PET. Large samples of the new scintillators will be produced and their performance will be evaluated. PET modules based on the new scintillators will also be built and characterized

  10. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  11. Fixed-point image orthorectification algorithms for reduced computational cost

    Science.gov (United States)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse directly would require an iterative procedure; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
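
    A toy sketch of the two modifications follows. It is not the dissertation's algorithm: it uses a simplified pinhole projection u = f*x/z, a Q32.32-style scaling (Python integers are arbitrary precision, so 64- versus 128-bit word effects are not modeled), and hypothetical camera and depth values. The division is avoided by the first-order expansion 1/z ≈ 1/z0 - (z - z0)/z0^2 = (2*z0 - z)/z0^2 around a nominal depth z0.

        # Toy illustration of fixed-point projection with a division-free
        # linear approximation of 1/z. All values are hypothetical.
        FRAC_BITS = 32
        ONE = 1 << FRAC_BITS                     # 1.0 in Q32.32

        def to_fixed(x: float) -> int:
            return int(round(x * ONE))

        def to_float(x: int) -> float:
            return x / ONE

        def fixed_mul(a: int, b: int) -> int:
            return (a * b) >> FRAC_BITS          # rescale so the product stays Q32.32

        def inv_linear(z_fx: int, inv_z0_fx: int, z0_fx: int) -> int:
            """Approximate 1/z without a division: (2*z0 - z) * (1/z0)**2."""
            inv_z0_sq = fixed_mul(inv_z0_fx, inv_z0_fx)
            return fixed_mul(2 * z0_fx - z_fx, inv_z0_sq)

        f_fx = to_fixed(1000.0)                  # focal length in pixels (hypothetical)
        z0_fx = to_fixed(500.0)                  # nominal terrain depth from the DEM (hypothetical)
        inv_z0_fx = to_fixed(1.0 / 500.0)        # the only reciprocal, precomputed once

        for z in (480.0, 500.0, 520.0):
            x_fx, z_fx = to_fixed(12.5), to_fixed(z)
            u_fx = fixed_mul(fixed_mul(f_fx, x_fx), inv_linear(z_fx, inv_z0_fx, z0_fx))
            print(f"z={z:5.1f}  fixed-point u={to_float(u_fx):8.4f}  exact u={1000.0 * 12.5 / z:8.4f}")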

  12. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  13. Computer Aided Design of a Low-Cost Painting Robot

    Directory of Open Access Journals (Sweden)

    SYEDA MARIA KHATOON ZAIDI

    2017-10-01

    The application of robots or robotic systems for painting parts is becoming increasingly conventional, to improve reliability, productivity and consistency and to decrease waste. However, in Pakistan only high-end industries are able to afford the luxury of a robotic system for various purposes. In this study we propose an economical painting robot that a small-scale industry can install in its plant with ease. The importance of this robot is that, being cost effective, it can easily be deployed in small manufacturing industries and therefore eliminate health problems occurring to the individual in charge of painting parts on an everyday basis. To achieve this aim, the robot is made with local parts with only a few exceptions, to cut costs, and the programming language is kept at a mediocre level. Image processing is used for object recognition, and the robot can be programmed to paint various simple geometries. The robot is placed on a conveyor belt to maximize productivity. A four DoF (Degree of Freedom) arm increases the working envelope and accessibility for painting different shaped parts with ease. This robot is capable of painting the up, front, back, left and right sides of the part with a single colour. Initially, CAD (Computer Aided Design) models of the robot were developed, which were analyzed, modified and improved to withstand loading conditions and perform the task efficiently. After design selection, appropriate motors and materials were selected and the robot was developed. Throughout the development phase, minor problems and errors were fixed as they arose. Lastly, the robot was integrated with the computer and image processing for autonomous control. The final results demonstrated that the robot is economical and reduces paint wastage.
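
    The object-recognition step mentioned above can be illustrated with a minimal, hypothetical OpenCV sketch; it is not the authors' code, and the file name 'frame.png', the Otsu threshold and the minimum-area filter are all assumptions. It thresholds one conveyor-camera frame and reports bounding boxes of candidate parts that could be handed to the arm's painting path planner.

        # Hypothetical sketch of the object-recognition step (OpenCV >= 4): threshold
        # a camera frame and report bounding boxes of candidate parts on the conveyor.
        import cv2

        frame = cv2.imread("frame.png")                   # one conveyor-camera frame (assumed file)
        if frame is None:
            raise SystemExit("no camera frame available")

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        for c in contours:
            if cv2.contourArea(c) < 500:                  # ignore small specks/noise
                continue
            x, y, w, h = cv2.boundingRect(c)
            print(f"part at ({x},{y}), size {w}x{h} -> hand off to painting path planner")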

  14. Computer aided design of a low-cost painting robot

    International Nuclear Information System (INIS)

    Zaidi, S.M.; Janejo, F.; Mujtaba, S.B.

    2017-01-01

    The application of robots or robotic systems for painting parts is becoming increasingly conventional, to improve reliability, productivity and consistency and to decrease waste. However, in Pakistan only high-end industries are able to afford the luxury of a robotic system for various purposes. In this study we propose an economical painting robot that a small-scale industry can install in its plant with ease. The importance of this robot is that, being cost effective, it can easily be deployed in small manufacturing industries and therefore eliminate health problems occurring to the individual in charge of painting parts on an everyday basis. To achieve this aim, the robot is made with local parts with only a few exceptions, to cut costs, and the programming language is kept at a mediocre level. Image processing is used for object recognition, and the robot can be programmed to paint various simple geometries. The robot is placed on a conveyor belt to maximize productivity. A four DoF (Degree of Freedom) arm increases the working envelope and accessibility for painting different shaped parts with ease. This robot is capable of painting the up, front, back, left and right sides of the part with a single colour. Initially, CAD (Computer Aided Design) models of the robot were developed, which were analyzed, modified and improved to withstand loading conditions and perform the task efficiently. After design selection, appropriate motors and materials were selected and the robot was developed. Throughout the development phase, minor problems and errors were fixed as they arose. Lastly, the robot was integrated with the computer and image processing for autonomous control. The final results demonstrated that the robot is economical and reduces paint wastage. (author)

  15. Highly integrated image sensors enable low-cost imaging systems

    Science.gov (United States)

    Gallagher, Paul K.; Lake, Don; Chalmers, David; Hurwitz, J. E. D.

    1997-09-01

    The highest barrier to wide-scale implementation of vision systems has been cost. This is closely followed by the level of difficulty of putting a complete imaging system together. As anyone who has ever been in the position of creating a vision system knows, the various bits and pieces supplied by the many vendors are not under any type of standardization control. In short, unless you are an expert in imaging, electrical interfacing, computers, digital signal processing, and high speed storage techniques, you will likely spend more money trying to do it yourself than buying the exceedingly expensive systems available. Another alternative is making headway into the imaging market, however. The growing investment in highly integrated CMOS-based imagers is addressing both the cost and the system integration difficulties. This paper discusses the benefits gained from CMOS-based imaging, and how these benefits are already being applied.

  16. Decommissioning costing approach based on the standardised list of costing items. Lessons learnt by the OMEGA computer code

    International Nuclear Information System (INIS)

    Daniska, Vladimir; Rehak, Ivan; Vasko, Marek; Ondra, Frantisek; Bezak, Peter; Pritrsky, Jozef; Zachar, Matej; Necas, Vladimir

    2011-01-01

    The document 'A Proposed Standardised List of Items for Costing Purposes' was issued in 1999 by the OECD/NEA, IAEA and European Commission (EC) to promote harmonisation in decommissioning costing. It is a systematic list of decommissioning activities classified in chapters 01 to 11, with three numbered levels. Four cost groups are defined for the costs at each level. The document thus constitutes a standardised matrix of decommissioning activities and cost groups, with the content of each item defined. Knowing what is behind the items makes the comparison of costs for decommissioning projects transparent. Two approaches are identified for use of the standardised cost structure. The first approach converts cost data from existing project-specific cost structures into the standardised cost structure for the purpose of cost presentation. The second approach uses the standardised cost structure as the basis for the cost calculation structure; the calculated cost data are formatted in the standardised cost format directly, and several additional advantages may be identified in this approach. The paper presents the costing methodology based on the standardised cost structure and lessons learnt from the last ten years of implementing the standardised cost structure as the cost calculation structure in the computer code OMEGA. The code also includes on-line management of decommissioning waste, radioactive decay, evaluation of exposure, and generation and optimisation of the Gantt chart of a decommissioning project, which makes the OMEGA code an effective tool for planning and optimisation of decommissioning processes. (author)
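
    A minimal sketch of the second approach (costing directly in the standardised structure) might key every calculated cost on a numbered item code and one of the four cost groups, then roll results up by chapter for presentation. The item codes, group labels and figures below are placeholders for illustration, not values or definitions taken from the OMEGA code or the 1999 document.

        # Sketch of costing directly in a standardised structure: each calculated
        # cost is keyed by an item code (chapter.xx.yy) and a cost group, then
        # rolled up by chapter. Codes, group labels and amounts are placeholders.
        from collections import defaultdict

        COST_GROUPS = ("labour", "investment", "expenses", "contingency")  # assumed labels

        ledger = defaultdict(float)   # (item_code, cost_group) -> cost

        def add_cost(item_code: str, group: str, amount: float) -> None:
            assert group in COST_GROUPS
            ledger[(item_code, group)] += amount

        # Hypothetical calculated activities following the chapter.xx.yy pattern.
        add_cost("05.01.02", "labour", 120_000.0)       # e.g. a dismantling activity
        add_cost("05.01.02", "expenses", 35_000.0)
        add_cost("09.02.01", "investment", 480_000.0)   # e.g. waste packaging equipment

        by_chapter = defaultdict(float)
        for (item, _group), amount in ledger.items():
            by_chapter[item.split(".")[0]] += amount

        for chapter in sorted(by_chapter):
            print(f"chapter {chapter}: {by_chapter[chapter]:,.0f}")
        print(f"total: {sum(by_chapter.values()):,.0f}")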

  17. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  18. Manual of phosphoric acid fuel cell power plant cost model and computer program

    Science.gov (United States)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    The cost analysis of a phosphoric acid fuel cell power plant comprises two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
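
    As a rough illustration of the levelized-annual-cost part of such an analysis (not the FORTRAN program described; the discount rate, plant life and cost figures below are placeholders), the sketch applies the standard capital recovery factor, CRF = i(1+i)^n / ((1+i)^n - 1), to the capital cost and adds annual operating and fuel costs.

        # Illustrative levelized-annual-cost calculation (not the FORTRAN code described).
        # All inputs are placeholder values.
        def capital_recovery_factor(rate: float, years: int) -> float:
            g = (1.0 + rate) ** years
            return rate * g / (g - 1.0)

        def levelized_annual_cost(capital: float, rate: float, years: int,
                                  annual_om: float, annual_fuel: float) -> float:
            return capital * capital_recovery_factor(rate, years) + annual_om + annual_fuel

        cost = levelized_annual_cost(capital=25e6, rate=0.08, years=20,
                                     annual_om=0.6e6, annual_fuel=1.9e6)
        print(f"levelized annual cost: ${cost / 1e6:.2f} M/yr")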

  19. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
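
    The reliability figures quoted above are Cronbach's alpha values. For readers unfamiliar with the statistic, the sketch below computes alpha for a small, made-up item-response matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the responses are invented and are not CPQ data.

        # Cronbach's alpha for a respondents-by-items matrix (made-up data, not CPQ responses).
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """scores: shape (n_respondents, n_items)."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        # Five respondents answering four Likert-style items (hypothetical values).
        responses = np.array([[5, 4, 5, 4],
                              [2, 2, 3, 2],
                              [4, 4, 4, 5],
                              [1, 2, 1, 1],
                              [3, 3, 4, 3]])
        print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")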

  20. Development of a computer program for the cost analysis of spent fuel management

    International Nuclear Information System (INIS)

    Choi, Heui Joo; Lee, Jong Youl; Choi, Jong Won; Cha, Jeong Hun; Whang, Joo Ho

    2009-01-01

    So far, a substantial amount of spent fuel has been generated from PWR and CANDU reactors. It is being temporarily stored at the nuclear power plant sites. It is expected that the temporary storage facility will be full of spent fuel by around 2016. The government plans to solve the problem by constructing an interim storage facility soon. The Radioactive Waste Management Act was enacted in 2008 to manage spent fuel safely in Korea. According to the act, a radioactive waste management fund, which will be used for the transportation, interim storage, and final disposal of spent fuel, has been established. The cost of spent fuel management is surprisingly high and involves considerable uncertainty. KAERI and Kyunghee University have developed cost estimation tools to evaluate the cost of spent fuel management based on engineering design and calculation. It is not easy to develop such a tool while the national policy on spent fuel management has not yet been fixed. Thus, the current version of the computer program is based on the current conceptual design of each management system. The main purpose of this paper is to introduce the computer program developed for the cost analysis of spent fuel management. In order to show the application of the program, a spent fuel management scenario is prepared and the cost for the scenario is estimated.

  1. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  2. Low cost photomultiplier high-voltage readout system

    International Nuclear Information System (INIS)

    Oxoby, G.J.; Kunz, P.F.

    1976-10-01

    The Large Aperture Solenoid Spectrometer (LASS) at the Stanford Linear Accelerator Center (SLAC) requires the monitoring of over 300 voltages. These data are recorded on magnetic tape along with the event data. They must also be displayed so that operators can easily monitor and adjust the voltages. A low-cost high-voltage readout system has been implemented to offer stand-alone digital readout capability as well as fast data transfer to a host computer. The system is flexible enough to permit use of a DVM or ADC and commercially available analogue multiplexers

  3. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
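
    A minimal sketch of the grouping idea described above (placeholder thread IDs and instruction addresses, not the output of any real debugger interface): collect the calling-instruction address for each thread, group threads that share an address, and flag unusually small groups as candidates for defective or stuck threads.

        # Sketch of the grouping idea: cluster threads by calling-instruction address
        # so that small, anomalous groups stand out. Addresses are placeholders.
        from collections import defaultdict

        # thread id -> address of the calling instruction (e.g. top of its call stack)
        call_sites = {0: 0x40123c, 1: 0x40123c, 2: 0x40123c, 3: 0x40123c,
                      4: 0x40123c, 5: 0x40123c, 6: 0x4019f8, 7: 0x40123c}

        groups = defaultdict(list)
        for tid, addr in call_sites.items():
            groups[addr].append(tid)

        for addr, tids in sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True):
            flag = "  <-- possible defective/stuck threads" if len(tids) < len(call_sites) // 4 else ""
            print(f"address {addr:#x}: threads {tids}{flag}")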

  4. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  5. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  6. Low-Cost High-Performance MRI

    Science.gov (United States)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI [...] standards for affordable (<$50,000) and robust portable devices.

  7. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures, and of the ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of three dual-processor 450 MHz PCs linked by a high speed LAN. The three PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All three PCs are identically configured with the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the three PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0, marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all six processors. We describe three applications of the system: (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards, and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or for any computer-intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and
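
    The efficiency gains above come from farming out many independent Monte Carlo runs across processors. A generic sketch of that pattern with Python's multiprocessing module is given below; it merely stands in for the enFuzion load-distribution layer, and the 'simulation' is a trivial pi estimate rather than a BEAM/EGSnrc run, with arbitrary run counts and worker numbers.

        # Generic sketch of distributing independent Monte Carlo runs over several
        # worker processes. The "simulation" is a toy pi estimate, not BEAM/EGSnrc.
        import random
        from multiprocessing import Pool

        def one_run(args):
            seed, n_histories = args
            rng = random.Random(seed)
            hits = sum(1 for _ in range(n_histories)
                       if rng.random() ** 2 + rng.random() ** 2 < 1.0)
            return 4.0 * hits / n_histories

        if __name__ == "__main__":
            jobs = [(seed, 200_000) for seed in range(24)]      # 24 independent runs
            with Pool(processes=6) as pool:                     # e.g. six processors, as in the cluster
                results = pool.map(one_run, jobs)
            print(f"mean over {len(results)} runs: {sum(results) / len(results):.4f}")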

  8. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Science.gov (United States)

    Guerrero, Ginés D.; Imbernón, Baldomero; García, José M.

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor. PMID:25025055

  9. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Ginés D. Guerrero

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  10. Development of computer software for pavement life cycle cost analysis.

    Science.gov (United States)

    1988-01-01

    The life cycle cost analysis program (LCCA) is designed to automate and standardize life cycle costing in Virginia. It allows the user to input information necessary for the analysis, and it then completes the calculations and produces a printed copy...

  11. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  12. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  13. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data

  14. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    ...computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net... ...and reasoning, assistive technologies. (Army High Performance Computing Research Center, www.ahpcrc.org)

  15. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of... high performance computing in biomathematics applications.

  16. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  17. Low Cost, High Efficiency, High Pressure Hydrogen Storage

    Energy Technology Data Exchange (ETDEWEB)

    Mark Leavitt

    2010-03-31

    A technical and design evaluation was carried out to meet DOE hydrogen fuel targets for 2010. These targets consisted of a system gravimetric capacity of 2.0 kWh/kg, a system volumetric capacity of 1.5 kWh/L and a system cost of $4/kWh. In compressed hydrogen storage systems, the vast majority of the weight and volume is associated with the hydrogen storage tank. In order to meet gravimetric targets for compressed hydrogen tanks, 10,000 psi carbon-fiber/resin composites were used to provide the high strength required as well as low weight. For the 10,000 psi tanks, carbon fiber is the largest portion of their cost. Quantum Technologies is a tier-one hydrogen system supplier for automotive companies around the world. Over the course of the program Quantum focused on development of technology to allow the compressed hydrogen storage tank to meet DOE goals. At the start of the program in 2004 Quantum was supplying systems with a specific energy of 1.1-1.6 kWh/kg, a volumetric capacity of 1.3 kWh/L and a cost of $73/kWh. Based on the gaps between the DOE targets and Quantum's then-current capabilities, focus was placed first on cost reduction and second on weight reduction. Both of these were to be accomplished without reduction of the fuel system's performance or reliability. Three distinct areas were investigated: optimization of composite structures, development of “smart tanks” that could monitor the health of the tank, thus allowing a lower design safety factor, and the development of “Cool Fuel” technology to allow higher density gas to be stored, thus allowing smaller/lower pressure tanks to hold the required fuel supply. The second phase of the project deals with three additional distinct tasks focusing on composite structure optimization, liner optimization, and metal.

  18. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue to increase the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centres has shifted from coping efficiently with petabyte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  19. Are PES connection costs too high?

    International Nuclear Information System (INIS)

    Scott, N.

    1998-01-01

    Windfarm developers often have good reason to question the costs they are quoted by their local distribution company for connection to the system, and these costs can now be challenged under the 'Competition in Connection' initiative. Econnect Ltd specialise in electrical connections for renewable generation throughout the UK and Europe, and have worked on many projects where alternative connections have been designed at more competitive prices. This paper provides some examples which illustrate the importance of acquiring a thorough understanding of all power system issues and PES concerns if the most cost-effective connection is to be realised. (Author)

  20. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge.

  1. Cost-effectiveness of implementing computed tomography screening for lung cancer in Taiwan.

    Science.gov (United States)

    Yang, Szu-Chun; Lai, Wu-Wei; Lin, Chien-Chung; Su, Wu-Chou; Ku, Li-Jung; Hwang, Jing-Shiang; Wang, Jung-Der

    2017-06-01

    A screening program for lung cancer requires more empirical evidence. Based on the experience of the National Lung Screening Trial (NLST), we developed a method to adjust lead-time bias and quality-of-life changes for estimating the cost-effectiveness of implementing computed tomography (CT) screening in Taiwan. The target population was high-risk (≥30 pack-years) smokers between 55 and 75 years of age. From a nation-wide, 13-year follow-up cohort, we estimated quality-adjusted life expectancy (QALE), loss-of-QALE, and lifetime healthcare expenditures per case of lung cancer stratified by pathology and stage. Cumulative stage distributions for CT-screening and no-screening were assumed equal to those for CT-screening and radiography-screening in the NLST to estimate the savings of loss-of-QALE and additional costs of lifetime healthcare expenditures after CT screening. Costs attributable to screen-negative subjects, false-positive cases and radiation-induced lung cancer were included to obtain the incremental cost-effectiveness ratio from the public payer's perspective. The incremental costs were US$22,755 per person. After dividing this by savings of loss-of-QALE (1.16 quality-adjusted life year (QALY)), the incremental cost-effectiveness ratio was US$19,683 per QALY. This ratio would fall to US$10,947 per QALY if the stage distribution for CT-screening was the same as that of screen-detected cancers in the NELSON trial. Low-dose CT screening for lung cancer among high-risk smokers would be cost-effective in Taiwan. As only about 5% of our women are smokers, future research is necessary to identify the high-risk groups among non-smokers and increase the coverage. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
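
    The headline ratio above is simply the incremental lifetime cost divided by the savings in loss-of-QALE. A one-line check with the rounded figures quoted in the abstract follows; the published US$19,683 per QALY differs slightly because it was computed from unrounded inputs.

        # Quick check of the incremental cost-effectiveness ratio using the rounded
        # per-person figures quoted in the abstract (the published value used unrounded inputs).
        incremental_cost_usd = 22_755      # additional lifetime cost per person screened
        qale_saved_qaly = 1.16             # savings of loss-of-QALE per lung cancer case
        icer = incremental_cost_usd / qale_saved_qaly
        print(f"ICER ~ US${icer:,.0f} per QALY gained")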

  2. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  3. High cost of nuclear power plants

    International Nuclear Information System (INIS)

    Bassett, C.

    1978-01-01

    Retroactive safety standards were found to account for over half the costs of a nuclear power plant and point up the need for an effective cost-benefit analysis of changes made by the Nuclear Regulatory Commission after construction has started. The author compared the Davis-Besse Unit No. 1 construction-cost estimates with the final-cost increases during a rate-case investigation in Ohio. He presents data furnished for ten of the largest construction contracts to illustrate the cost increases involving fixed hardware and intensive labor. The situation was found to repeat with other utilities across the country even though safeguards against irresponsible low bidding were introduced. Low bidding was found to continue, encouraged by the need for retrofitting to meet regulation changes. The average cost per kilowatt of major light-water reactors is shown to have increased from $171 in 1970 to $555 in 1977, while construction duration increased from 43.4 to 95.6 months during the same period

  4. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  5. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  6. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  7. High and rising health care costs.

    Science.gov (United States)

    Ginsburg, Paul B

    2008-10-01

    The U.S. is spending a growing share of the GDP on health care, outpacing other industrialized countries. This synthesis examines why costs are higher in the U.S. and what is driving their growth. Key findings include: health care inefficiency, medical technology and health status (particularly obesity) are the primary drivers of rising U.S. health care costs. Health payer systems that reward inefficiencies and preempt competition have impeded productivity gains in the health care sector. The best evidence indicates medical technology accounts for one-half to two-thirds of spending growth. While medical malpractice insurance and defensive medicine contribute to health costs, they are not large enough factors to significantly contribute to a rise in spending. Research is consistent that demographics will not be a significant factor in driving spending despite the aging baby boomers.

  8. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  9. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully exploit the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system achieving 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  10. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
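
    For readers unfamiliar with the first of these measures, the sketch below is a minimal plug-in estimator of transfer entropy between two binary spike trains; the synthetic spike trains, the single-sample histories, and the lack of bias correction are simplifying assumptions for illustration only, not the estimator used in the study.

```python
# Minimal plug-in estimator of transfer entropy TE(Y -> X) for binary time series.
# Illustrative only: single-sample histories, no bias correction, synthetic data.
import numpy as np

def transfer_entropy(x, y):
    """TE(Y->X) in bits, with one-step histories x_t and y_t predicting x_{t+1}."""
    x_next, x_past, y_past = x[1:], x[:-1], y[:-1]
    # Joint distribution over (x_{t+1}, x_t, y_t), each variable binary.
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(x_next, x_past, y_past):
        counts[a, b, c] += 1
    p = counts / counts.sum()
    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p[a, b, c] == 0:
                    continue
                p_abc = p[a, b, c]
                p_bc = p[:, b, c].sum()   # p(x_t, y_t)
                p_ab = p[a, b, :].sum()   # p(x_{t+1}, x_t)
                p_b = p[:, b, :].sum()    # p(x_t)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1) ^ (rng.random(10_000) < 0.1)   # x partly driven by y's past
print(f"TE(Y->X) ~ {transfer_entropy(x, y):.3f} bits")
```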

  11. High cost of stage IV pressure ulcers.

    Science.gov (United States)

    Brem, Harold; Maggi, Jason; Nierman, David; Rolnitzky, Linda; Bell, David; Rennert, Robert; Golinko, Michael; Yan, Alan; Lyder, Courtney; Vladeck, Bruce

    2010-10-01

    The aim of this study was to calculate and analyze the cost of treatment for stage IV pressure ulcers. A retrospective chart analysis of patients with stage IV pressure ulcers was conducted. Hospital records and treatment outcomes of these patients were followed up for a maximum of 29 months and analyzed. Costs directly related to the treatment of pressure ulcers and their associated complications were calculated. Nineteen patients with stage IV pressure ulcers (11 hospital-acquired and 8 community-acquired) were identified and their charts were reviewed. The average hospital treatment cost associated with stage IV pressure ulcers and related complications was $129,248 for hospital-acquired ulcers during 1 admission, and $124,327 for community-acquired ulcers over an average of 4 admissions. The costs incurred from stage IV pressure ulcers are much greater than previously estimated. Halting the progression of early stage pressure ulcers has the potential to eradicate enormous pain and suffering, save thousands of lives, and reduce health care expenditures by millions of dollars. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. The High Cost of Saving Energy Dollars.

    Science.gov (United States)

    Rose, Patricia

    1985-01-01

    In alternative financing a private company provides the capital and expertise for improving school energy efficiency. Savings are split between the school system and the company. Options for municipal leasing, cost sharing, and shared savings are explained along with financial, procedural, and legal considerations. (MLF)

  13. Low-Budget Computer Programming in Your School (An Alternative to the Cost of Large Computers). Illinois Series on Educational Applications of Computers. No. 14.

    Science.gov (United States)

    Dennis, J. Richard; Thomson, David

    This paper is concerned with a low cost alternative for providing computer experience to secondary school students. The brief discussion covers the programmable calculator and its relevance for teaching the concepts and the rudiments of computer programming and for computer problem solving. A list of twenty-five programming activities related to…

  14. DECOST: computer routine for decommissioning cost and funding analysis

    International Nuclear Information System (INIS)

    Mingst, B.C.

    1979-12-01

    One of the major controversies surrounding the decommissioning of nuclear facilities is the lack of financial information on just what the eventual costs will be. The Nuclear Regulatory Commission has studies underway to analyze the costs of decommissioning of nuclear fuel cycle facilities and some other similar studies have also been done by other groups. These studies all deal only with the final cost outlays needed to finance decommissioning in an unchangeable set of circumstances. Funding methods and planning to reduce the costs and financial risks are usually not attempted. The DECOST program package is intended to fill this void and allow wide-ranging study of the various options available when planning for the decommissioning of nuclear facilities

  15. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed

  16. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
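
    As a small, hedged illustration of the kind of facility the survey refers to (not code from the paper), an arbitrary-precision package such as Python's mpmath lets the working precision be raised well beyond IEEE 64-bit arithmetic:

```python
# Illustrative use of arbitrary-precision arithmetic with the mpmath package.
# Not taken from the paper; it only shows how the working precision can be raised
# far beyond IEEE 64-bit (about 16 significant digits).
from mpmath import mp, mpf, exp, pi

mp.dps = 50                      # work with 50 significant decimal digits
a = exp(mpf(1))                  # e to 50 digits
b = pi ** 2 / 6                  # zeta(2) to 50 digits
print(a)
print(b)

mp.dps = 16                      # back to roughly double precision
print(exp(mpf(1)))
```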

  17. Cost-Effectiveness of Computed Tomographic Colonography: A Prospective Comparison with Colonoscopy

    International Nuclear Information System (INIS)

    Arnesen, R.B.; Ginnerup-Pedersen, B.; Poulsen, P.B.; Benzon, K. von; Adamsen, S.; Laurberg, S.; Hart-Hansen, O.

    2007-01-01

    Purpose: To estimate the cost-effectiveness of detecting colorectal polyps with computed tomographic colonography (CTC) and subsequent polypectomy with primary colonoscopy (CC), using CC as the alternative strategy. Material and Methods: A marginal analysis was performed regarding 103 patients who had had CTC prior to same-day CC at two hospitals, H-I (n = 53) and H-II (n = 50). The patients were randomly chosen from surveillance and symptomatic study populations (148 at H-I and 231 at H-II). Populations, organizations, and procedures were compared. Cost data on time consumption, medication, and minor equipment were collected prospectively, while data on salaries and major equipment were collected retrospectively. The effect was the (previously published) sensitivities of CTC and CC for detection of colorectal polyps ≥6 mm (H-I, n = 148) or ≥5 mm (H-II, n = 231). Results: Thirteen patients at each center had at least one colorectal polyp ≥6 mm or ≥5 mm. CTC was the cost-effective alternative at H-I (Euro 187 vs. Euro 211), while CC was the cost-effective alternative at H-II (Euro 239 vs. Euro 192). The cost-effectiveness (costs per finding) mainly depended on the sensitivity of CTC and CC, but the depreciation of equipment and the staff's use of time were highly influential as well. Conclusion: Detection of colorectal polyps ≥6 mm or ≥5 mm with CTC, followed by polypectomy by CC, can be performed cost-effectively at some institutions with the appropriate hardware and organization.

  18. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, so as to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  19. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that fulfill the availability and reliability demands as well as the increase in required data processing power. Alongside these increased quality demands, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  20. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. Comparison of different strategies in prenatal screening for Down's syndrome: cost effectiveness analysis of computer simulation.

    Science.gov (United States)

    Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François

    2009-02-13

    To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screening) and to determine the most useful cut-off values for risk. Computer simulations to study integrated, sequential, and contingent screening strategies with various cut-offs leading to 19 potential screening algorithms. The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110 948 pregnancies from the province of Québec for the year 2001. Cost effectiveness ratios, incremental cost effectiveness ratios, and screening options' outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure-related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening in the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in the first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure-related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100
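
    For readers unfamiliar with the ratios reported above, the following sketch computes a cost effectiveness ratio and an incremental cost effectiveness ratio for two hypothetical strategies; all figures are placeholders, not the SURUSS-based values from the study.

```python
# Hypothetical cost-effectiveness comparison of two screening strategies.
# All figures below are placeholders for illustration, not results from the study.

def cer(total_cost, cases_detected):
    """Cost-effectiveness ratio: cost per case of Down's syndrome detected."""
    return total_cost / cases_detected

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A versus strategy B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Strategy A (e.g. contingent screening) vs. strategy B (e.g. maternal age alone)
cost_a, cases_a = 3_500_000.0, 130   # hypothetical total cost (C$) and cases detected
cost_b, cases_b = 1_200_000.0, 60

print(f"CER A:  {cer(cost_a, cases_a):,.0f} $ per case detected")
print(f"CER B:  {cer(cost_b, cases_b):,.0f} $ per case detected")
print(f"ICER (A vs B): {icer(cost_a, cases_a, cost_b, cases_b):,.0f} $ per additional case")
```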

  3. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by the two databases are made using gnuplot and Ganglia's real-time graphical user interface
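
    A minimal sketch of script-driven relational storage for monitoring metrics is shown below; it uses Python's built-in sqlite3 module so it runs self-contained, and the table layout is an illustrative assumption rather than the schema actually deployed for Ganglia.

```python
# Minimal relational store for host metrics, in the spirit of replacing RRD with a
# script-driven SQL database. Uses sqlite3 for a self-contained example; the schema
# is an illustrative guess, not the SLAC/Ganglia one.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        host      TEXT    NOT NULL,
        metric    TEXT    NOT NULL,
        timestamp INTEGER NOT NULL,
        value     REAL    NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_host_metric_time ON metrics(host, metric, timestamp)")

now = int(time.time())
samples = [("node01", "load_one", now, 3.2),
           ("node01", "mem_free_kb", now, 1_250_000.0),
           ("node02", "load_one", now, 0.7)]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", samples)

# Example query: average one-minute load per host.
for row in conn.execute(
        "SELECT host, AVG(value) FROM metrics WHERE metric = 'load_one' GROUP BY host"):
    print(row)
conn.close()
```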

  4. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  5. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud directly controls the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and resource requirements until the service is actually purchased, or whether it will support the existing applications that are available on traditional...

  6. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    Science.gov (United States)

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
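
    The job-ordering idea described above can be caricatured with a toy scheduler that sorts jobs by estimated runtime and packs them into fixed-length billing units; the runtime estimates and the hourly price below are invented for illustration and are not the Roundup cost model.

```python
# Toy illustration of ordering jobs by estimated runtime and packing them into
# hour-long billing units, as cloud providers traditionally charged per instance-hour.
# The runtime estimates and the $/hour price are made-up values.

def pack_jobs(runtimes_min, slot_min=60):
    """Greedy first-fit-decreasing packing of job runtimes into fixed-size slots."""
    slots = []  # remaining free minutes per paid slot
    for r in sorted(runtimes_min, reverse=True):
        for i, free in enumerate(slots):
            if r <= free:
                slots[i] -= r
                break
        else:
            slots.append(slot_min - r)
    return len(slots)

jobs = [12, 45, 7, 30, 55, 20, 9, 40, 15, 25]   # estimated minutes per comparison
price_per_hour = 0.10                            # hypothetical instance price
hours = pack_jobs(jobs)
print(f"packed: {hours} instance-hours -> ${hours * price_per_hour:.2f}")
print(f"naive one-job-per-hour       -> ${len(jobs) * price_per_hour:.2f}")
```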

  7. Factors cost effectively improved using computer simulations of ...

    African Journals Online (AJOL)

    LPhidza

    effectively managed using computer simulations in semi-arid conditions pertinent to much of sub-Saharan Africa. ... small scale farmers to obtain optimal crop yields thus ensuring their food security and livelihood is ... those that simultaneously incorporate and simulate processes involved throughout the course of crop ...

  8. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: Future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousand) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group in Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT center (ZID) we installed a flexible grid system which uses PCs (currently 162) in students' labs during nights, weekends and holidays; it is especially used to compare different systems (local resource managers, other grid software, e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  9. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    Science.gov (United States)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and retrain personnel. Observations of typical Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.

  10. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    Science.gov (United States)

    2011-01-01

    Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. Cost (Pound (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7 year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results CAD is cost increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment will result in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost effective alternative to double reading for mammography screening in the UK. This study

  11. Adaptive Radar Signal Processing-The Problem of Exponential Computational Cost

    National Research Council Canada - National Science Library

    Rangaswamy, Muralidhar

    2003-01-01

    .... Extensions to handle the case of non-Gaussian clutter statistics are presented. Current challenges of limited training data support, computational cost, and severely heterogeneous clutter backgrounds are outlined...

  12. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^(p-1) global continuity of the isogeometric solution

  13. Computer code for the costing and sizing of TNS tokamaks

    International Nuclear Information System (INIS)

    Sink, D.A.; Iwinski, E.M.

    1977-01-01

    A FORTRAN code for the COsting And Sizing of Tokamaks (COAST) is described. The code was written to conduct detailed analyses on the engineering features of the next tokamak fusion device following TFTR. The ORNL/Westinghouse study of TNS (The Next Step) has involved the investigation of a number of device options, each over a wide range of plasma sizes. A generalized description of TNS is incorporated in the code and includes refined modeling of over forty systems and subsystems. Considerable detailed design and analyses have provided the basis for the thermal, electrical, mechanical, nuclear, chemical, vacuum, and facility engineering of the various subsystems. Currently, the code provides a tool for the systematic comparison of four toroidal field (TF) coil technologies allowing both D-shaped and circular coils. The coil technologies are: (1) copper (both room temperature and liquid-nitrogen cooled), (2) superconducting NbTi, (3) superconducting Nb3Sn, and (4) a Cu/NbTi hybrid. For the poloidal field (PF) coil systems, copper conductors are assumed. The ohmic heating (OH) coils are located within the machine bore and have an air core, while the shaping field (SF) coils are located either within or outside the TF coils. The PF coil self and mutual inductances are calculated from the geometry, and the PF coil power supplies are modeled to account for time-dependent profiles for voltages and currents as governed by input data. Plasma heating is assumed to be by neutral beams, and impurity control is either passive or by a poloidal divertor system. The size modeling allows considerable freedom in specifying physics assumptions, operating scenarios, TF operating margin, and component geometric and performance parameters. Cost relationships have been developed for both plant and capital equipment and for annual utility and fuel expenses. The code has been used successfully to reproduce the sizing and costing of TFTR in order to calibrate the various models

  14. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
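
    The basket-style scheduling mentioned above can be caricatured in a few lines of Python; the basket size, the toy "transport step", and the thread pool are illustrative assumptions, not the Geant-V design.

```python
# Caricature of basket-based parallelism: particles are grouped into fixed-size
# baskets and each basket is handed to a worker that applies one transport step
# to the whole vector at once. Basket size and the toy physics are made up.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

BASKET_SIZE = 256

def transport_step(basket):
    """Toy vectorised 'transport': move particles along their direction and attenuate energy."""
    pos, direction, energy = basket
    step = np.random.default_rng().exponential(1.0, size=len(energy))
    return pos + direction * step[:, None], direction, energy * np.exp(-0.1 * step)

rng = np.random.default_rng(42)
n = 10_000
pos = rng.normal(size=(n, 3))
direction = rng.normal(size=(n, 3))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
energy = rng.uniform(1.0, 10.0, size=n)

baskets = [(pos[i:i + BASKET_SIZE], direction[i:i + BASKET_SIZE], energy[i:i + BASKET_SIZE])
           for i in range(0, n, BASKET_SIZE)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transport_step, baskets))

print(f"{len(baskets)} baskets processed; mean energy after one step:",
      np.mean(np.concatenate([e for _, _, e in results])))
```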

  15. A computational study of high entropy alloys

    Science.gov (United States)

    Wang, Yang; Gao, Michael; Widom, Michael; Hawk, Jeff

    2013-03-01

    As a new class of advanced materials, high-entropy alloys (HEAs) exhibit a wide variety of excellent materials properties, including high strength, reasonable ductility with appreciable work-hardening, corrosion and oxidation resistance, wear resistance, and outstanding diffusion-barrier performance, especially at elevated and high temperatures. In this talk, we will explain our computational approach to the study of HEAs that employs the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) method. The KKR-CPA method uses Green's function technique within the framework of multiple scattering theory and is uniquely designed for the theoretical investigation of random alloys from the first principles. The application of the KKR-CPA method will be discussed as it pertains to the study of structural and mechanical properties of HEAs. In particular, computational results will be presented for AlxCoCrCuFeNi (x = 0, 0.3, 0.5, 0.8, 1.0, 1.3, 2.0, 2.8, and 3.0), and these results will be compared with experimental information from the literature.

  16. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  17. Computing Cost Price for Cataract Surgery by Activity Based Costing (ABC) Method at Hazrat-E-Zahra Hospital, Isfahan University of Medical Sciences, 2014

    Directory of Open Access Journals (Sweden)

    Masuod Ferdosi

    2016-10-01

    Full Text Available Background: Hospital managers need to have accurate information about actual costs to make efficient and effective decisions. In the activity based costing method, activities are first recognized and then direct and indirect costs are computed based on allocation methods. The aim of this study was to compute the cost price for cataract surgery by the Activity Based Costing (ABC) method at Hazrat-e-Zahra Hospital, Isfahan University of Medical Sciences. Methods: This was a cross-sectional study computing the costs of cataract surgery by the activity based costing technique at Hazrat-e-Zahra Hospital, Isfahan University of Medical Sciences, 2014. Data were collected through interviews and direct observation and analyzed with Excel software. Results: According to the results of this study, the total cost of cataract surgery was 8,368,978 Rials. Personnel cost accounted for 62.2% (5,213,574 Rials) of the total cost of cataract surgery, which is the highest share of the surgery costs. The cost of consumables was 7.57% (1,992,852 Rials) of the surgery costs. Conclusion: Based on the results, there was a difference between the cost price of the services and the public tariff, which poses a financial risk to the hospital. Therefore, it is recommended to compute costs with appropriate methods such as activity based costing. The cost price of cataract surgery can be reduced by strategies such as decreasing the cost of consumables.
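
    A minimal sketch of the activity-based allocation idea follows; the activities, cost-driver quantities, and unit rates are invented placeholders, not the Hazrat-e-Zahra data.

```python
# Minimal activity-based costing sketch for one surgical procedure.
# Activities, driver volumes and rates below are invented for illustration only.

activities = {
    # activity: (cost-driver quantity consumed by one cataract surgery,
    #            cost per unit of the driver in Rials)
    "operating room time (min)": (35, 90_000),
    "surgeon and nursing time (min)": (35, 60_000),
    "sterilisation cycles": (1, 350_000),
    "consumables (kit)": (1, 1_900_000),
    "admission/discharge handling": (1, 450_000),
}

def cost_price(acts):
    """Sum driver quantity times unit rate over all activities."""
    return sum(qty * rate for qty, rate in acts.values())

total = cost_price(activities)
for name, (qty, rate) in activities.items():
    share = qty * rate / total
    print(f"{name:35s} {qty * rate:>12,d} Rials ({share:5.1%})")
print(f"{'cost price per surgery':35s} {total:>12,d} Rials")
```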

  18. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
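
    The Fourier approach outlined above can be sketched as follows: the grid image is transformed, one carrier sideband is isolated, and the phase of the inverse transform gives the displacement. The synthetic fringe pattern, carrier frequency, and filter radius below are assumptions for illustration, not the parameters of the reported experiment.

```python
# Sketch of single-sideband Fourier fringe analysis on a synthetic moire grid.
# The carrier frequency, displacement field and filter radius are made up; a real
# measurement would start from a recorded grid image instead.
import numpy as np

n, f0 = 256, 16                       # image size and carrier frequency (cycles/image)
y, x = np.mgrid[0:n, 0:n] / n
u = 0.002 * np.sin(2 * np.pi * y)     # hypothetical horizontal displacement field
pattern = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * (x + u))

spec = np.fft.fftshift(np.fft.fft2(pattern))
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
sideband = spec * (np.hypot(kx - f0, ky) < f0 / 2)    # keep the +f0 sideband only

analytic = np.fft.ifft2(np.fft.ifftshift(sideband))
phase = np.angle(analytic) - 2 * np.pi * f0 * x       # remove the carrier
phase = np.angle(np.exp(1j * phase))                  # rewrap to (-pi, pi]
u_est = phase / (2 * np.pi * f0)                      # displacement in image units

print("max |u - u_est| =", np.abs(u - u_est)[8:-8, 8:-8].max())
```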

  19. Costs incurred by applying computer-aided design/computer-aided manufacturing techniques for the reconstruction of maxillofacial defects.

    Science.gov (United States)

    Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur

    2014-12-01

    This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so the decision whether or not to use it remains a case-to-case decision with respect to cost versus benefit. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Does Not Compute: The High Cost of Low Technology Skills in the U.S.--and What We Can Do about It. Vital Signs: Reports on the Condition of STEM Learning in the U.S.

    Science.gov (United States)

    Change the Equation, 2015

    2015-01-01

    Although American millennials are the first generation of "digital natives"--that is, people who grew up with computers and the internet--they are not very tech savvy. Using technology for social networking, surfing the web, or taking selfies is a far cry from using it to solve complex problems at work or at home. Truly tech savvy people…

  1. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    Science.gov (United States)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme for the setting in which the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN depends on neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  2. Modelling the Intention to Adopt Cloud Computing Services: A Transaction Cost Theory Perspective

    Directory of Open Access Journals (Sweden)

    Ogan Yigitbasioglu

    2014-11-01

    Full Text Available This paper uses transaction cost theory to study cloud computing adoption. A model is developed and tested with data from an Australian survey. According to the results, perceived vendor opportunism and perceived legislative uncertainty around cloud computing were significantly associated with perceived cloud computing security risk. There was also a significant negative relationship between perceived cloud computing security risk and the intention to adopt cloud services. This study also reports on adoption rates of cloud computing in terms of applications, as well as the types of services used.

  3. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

    International Nuclear Information System (INIS)

    Bierschbach, M.C.

    1994-12-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the U.S. Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning

  4. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  5. Parents and the High Cost of Child Care: 2012 Report

    Science.gov (United States)

    Child Care Aware of America, 2012

    2012-01-01

    "Parents and the High Cost of Child Care: 2012 Report" presents 2011 data reflecting what parents pay for full-time child care in America. It includes average fees for both child care centers and family child care homes. Information was collected through a survey conducted in January 2012 that asked for the average costs charged for…

  6. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost for isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^(p-1) global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(log(N) p^2) for the one dimensional (1D) case, O(N p^2) for the two dimensional (2D) case, and O(N^(4/3) p^2) for the three dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PETIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher order continuity spaces, although the number of processors that one can efficiently employ is somehow limited.
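
    The quoted estimates can be tabulated directly; the sketch below merely evaluates the leading-order terms for a few values of N and p, with all constants ignored, so only the growth rates (not the absolute numbers) are meaningful.

```python
# Evaluate the leading-order cost estimates for the isogeometric multi-frontal
# solver in 1D, 2D and 3D (constants ignored, so only relative growth is meaningful).
import math

def flops(dim, n_dofs, p):
    if dim == 1:
        return math.log(n_dofs) * p**2          # O(log(N) p^2)
    if dim == 2:
        return n_dofs * p**2                    # O(N p^2)
    if dim == 3:
        return n_dofs ** (4.0 / 3.0) * p**2     # O(N^(4/3) p^2)
    raise ValueError("dim must be 1, 2 or 3")

for dim in (1, 2, 3):
    for n_dofs in (64**3, 128**3):
        for p in (1, 4):
            print(f"{dim}D  N={n_dofs:>9d}  p={p}  ~{flops(dim, n_dofs, p):.3e}")
```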

  7. Cost optimization of load carrying thin-walled precast high performance concrete sandwich panels

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hansen, Sanne; Hulin, Thomas

    2015-01-01

    The paper describes a procedure to find the structurally and thermally efficient design of load-carrying thin-walled precast High Performance Concrete Sandwich Panels (HPCSP) with an optimal economical solution. A systematic optimization approach is based on the selection of material performances and HPCSP geometrical parameters as well as on the material cost function in the HPCSP design. Cost functions are presented for High Performance Concrete (HPC), the insulation layer and the reinforcement, and include labour-related costs. The present study reports the economic data corresponding to specific manufacturing... The solution of the optimization problem is performed in the computer package Matlab® with the SQPlab package and integrates the processes of HPCSP design, quantity take-off and cost estimation. The proposed optimization process results in complex HPCSP design proposals that achieve minimum cost of HPCSP.
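
    A rough analogue of the SQP-based cost minimisation described above is sketched below using SciPy's SLSQP solver; the cost coefficients, thickness bounds, and thermal-resistance constraint are invented placeholders, not the HPCSP cost functions from the paper.

```python
# Toy sandwich-panel cost minimisation with an SQP-type solver (SLSQP).
# Decision variables: concrete wythe thickness t_c and insulation thickness t_i (m).
# Cost coefficients and the thermal/strength constraints are invented placeholders.
from scipy.optimize import minimize

COST_CONCRETE = 400.0     # $/m^3 of HPC, hypothetical
COST_INSULATION = 120.0   # $/m^3 of insulation, hypothetical
K_CONCRETE, K_INSULATION = 2.0, 0.03   # thermal conductivities, W/(m K)
R_REQUIRED = 5.0          # required thermal resistance, m^2 K / W
T_C_MIN = 0.04            # minimum concrete thickness for load bearing, m

def cost(x):
    t_c, t_i = x
    return COST_CONCRETE * 2 * t_c + COST_INSULATION * t_i   # two wythes + core, per m^2

def thermal_resistance(x):
    t_c, t_i = x
    return 2 * t_c / K_CONCRETE + t_i / K_INSULATION - R_REQUIRED  # must be >= 0

res = minimize(cost, x0=[0.08, 0.20], method="SLSQP",
               bounds=[(T_C_MIN, 0.20), (0.05, 0.50)],
               constraints=[{"type": "ineq", "fun": thermal_resistance}])

t_c, t_i = res.x
print(f"t_c = {t_c*1000:.0f} mm, t_i = {t_i*1000:.0f} mm, cost = {cost(res.x):.2f} $/m^2")
```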

  8. Computer code validation by high temperature chemistry

    International Nuclear Information System (INIS)

    Alexander, C.A.; Ogden, J.S.

    1988-01-01

    At least five of the computer codes utilized in the analysis of severe-fuel-damage-type events are directly dependent upon or can be verified by high temperature chemistry. These codes are ORIGEN, CORSOR, CORCON, VICTORIA, and VANESA. With the exception of CORCON and VANESA, it is necessary that verification experiments be performed on real irradiated fuel. For ORIGEN, the familiar Knudsen effusion cell is the best choice: a small piece of known mass and known burn-up is selected and volatilized completely into the mass spectrometer. The mass spectrometer is used in the integral mode to integrate the entire signal from preselected radionuclides, and from this integrated signal the total mass of the respective nuclides can be determined. For CORSOR and VICTORIA, flowing high-pressure hydrogen/steam must pass over the irradiated fuel and then enter the mass spectrometer. For these experiments, a high-pressure, high-temperature molecular beam inlet must be employed. Finally, in support of VANESA-CORCON, the very highest temperature and molten fuels must be contained and analyzed. Results from all types of experiments will be discussed and their applicability to present and future code development will also be covered

  9. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  10. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    Science.gov (United States)

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure.

  11. Computer simulations of high pressure systems

    International Nuclear Information System (INIS)

    Wilkins, M.L.

    1977-01-01

    Numerical methods are capable of solving very difficult problems in solid mechanics and gas dynamics. In the design of engineering structures, critical decisions are possible if the behavior of materials is correctly described in the calculation. Problems of current interest require accurate analysis of stress-strain fields that range from very small elastic displacement to very large plastic deformation. A finite difference program is described that solves problems over this range and in two and three space-dimensions and time. A series of experiments and calculations serve to establish confidence in the plasticity formulation. The program can be used to design high pressure systems where plastic flow occurs. The purpose is to identify material properties, strength and elongation, that meet the operating requirements. An objective is to be able to perform destructive testing on a computer rather than on the engineering structure. Examples of topical interest are given

  12. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and micro-processor based systems the book allows one to compare performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  13. A precise goniometer/tensiometer using a low cost single-board computer

    Science.gov (United States)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest for small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer, based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then we validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successfully testing for various combinations of liquids and solid surfaces, we concluded that our prototype device would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena compared to the commercial solutions.
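
    For nearly spherical (low Bond number) droplets, a much simpler circle fit already yields a contact angle; the sketch below fits a circle to synthetic edge points and intersects it with the baseline. This is an illustrative shortcut, not the Young-Laplace (ADSA) fit implemented in DropToolKit.

```python
# Circle-fit contact angle estimate for a nearly spherical sessile drop.
# Edge coordinates below are synthetic; a real measurement would extract them
# from the drop image. This is a shortcut, not the full Young-Laplace (ADSA) fit.
import numpy as np

# Synthetic drop profile: circle of radius 1 centred at (0, 0.5) above baseline y = 0,
# which corresponds to a contact angle of 120 degrees.
theta = np.linspace(np.radians(-60), np.radians(240), 200)
xs, ys = np.cos(theta), 0.5 + np.sin(theta)
xs, ys = xs[ys >= 0], ys[ys >= 0]          # keep only points above the baseline

# Algebraic (Kasa) circle fit: solve for centre (a, b) and radius r.
A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
c, *_ = np.linalg.lstsq(A, xs**2 + ys**2, rcond=None)
a, b = c[0], c[1]
r = np.sqrt(c[2] + a**2 + b**2)

# Contact angle between the baseline (y = 0) and the circle at the contact point:
# for a spherical cap, cos(theta_c) = -b / r.
contact_angle = np.degrees(np.arccos(-b / r)) if abs(b) <= r else float("nan")
print(f"fitted radius {r:.3f}, centre height {b:.3f}, contact angle {contact_angle:.1f} deg")
```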

  14. Costs and role of ultrasound follow-up of polytrauma patients after initial computed tomography

    International Nuclear Information System (INIS)

    Maurer, M.H.; Winkler, A.; Powerski, M.J.; Elgeti, F.; Huppertz, A.; Roettgen, R.; Marnitz, T.; Wichlas, F.

    2012-01-01

    Purpose: To assess the costs and diagnostic gain of abdominal ultrasound follow-up of polytrauma patients initially examined by whole-body computed tomography (CT). Materials and Methods: A total of 176 patients with suspected multiple trauma (126 men, 50 women; age 43.5 ± 17.4 years) were retrospectively analyzed with regard to supplementary and new findings obtained by ultrasound follow-up compared with the results of exploratory FAST (focused assessment with sonography for trauma) at admission and the findings of whole-body CT. A process model was used to document the staff, materials, and total costs of the ultrasound follow-up examinations. Results: FAST yielded 26 abdominal findings (organ injury and/or free intra-abdominal fluid) in 19 patients, while the abdominal scan of whole-body CT revealed 32 findings in 25 patients. FAST had 81 % sensitivity and 100 % specificity. Follow-up ultrasound examinations revealed new findings in 2 of the 25 patients with abdominal injuries detected with initial CT. In the 151 patients without abdominal injuries in the initial CT scan, ultrasound follow-up did not yield any supplementary or new findings. The total costs of an ultrasound follow-up examination were EUR 28.93. The total costs of all follow-up ultrasound examinations performed in the study population were EUR 5658.23. Conclusion: Follow-up abdominal ultrasound yields only a low overall diagnostic gain in polytrauma patients in whom initial CT fails to detect any abdominal injuries but incurs high personnel expenses for radiological departments. (orig.)
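
    The reported 81 % sensitivity and 100 % specificity follow from the usual diagnostic-accuracy arithmetic, with whole-body CT as the reference standard. The sketch below uses the per-finding counts quoted above for sensitivity and treats the abstract's 151 CT-negative patients as true negatives; it is an illustration of the arithmetic, not a reanalysis of the study.

        # Sensitivity/specificity arithmetic. tp/fn follow the per-finding counts
        # in the abstract; tn/fp mirror the per-patient figures behind 100 % specificity.
        def sensitivity(tp, fn):
            return tp / (tp + fn)

        def specificity(tn, fp):
            return tn / (tn + fp)

        tp, fn = 26, 6     # findings detected by FAST / missed relative to CT
        tn, fp = 151, 0    # patients correctly cleared / false alarms
        print(f"sensitivity {sensitivity(tp, fn):.0%}, specificity {specificity(tn, fp):.0%}")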

  15. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  16. Plant process computer replacements - techniques to limit installation schedules and costs

    International Nuclear Information System (INIS)

    Baker, M.D.; Olson, J.L.

    1992-01-01

    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating these techniques are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, developed, and factory tested.

  17. WHAT DRIVES HIGH COST OF FINANCE IN MOLDOVA?

    Directory of Open Access Journals (Sweden)

    Alexandru Stratan

    2012-03-01

    Full Text Available Why are there high costs of finance in the Republic of Moldova? Is this a problem for the business environment? These are the questions discussed in this paper. Following the well-known Growth Diagnostics approach by Hausmann, Rodrik and Velasco, the authors assess the barriers and impediments to access to finance in the Republic of Moldova. Guided by international and national statistics, we found evidence of poor intermediation, poor institutions, a high level of inflation, and high collateral requirements as major causes of the high cost of financial resources in the Republic of Moldova. At the end of the study the authors give policy recommendations, identifying other related fields to be addressed.

  18. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  19. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  20. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  1. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  2. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  3. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  4. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  5. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  6. Comparative cost analysis -- computed tomography vs. alternative diagnostic procedures, 1977-1980

    International Nuclear Information System (INIS)

    Gempel, P.A.; Harris, G.H.; Evans, R.G.

    1977-12-01

    In comparing the total national cost of utilizing computed tomography (CT) for medically indicated diagnoses with that of conventional x-ray, ultrasonography, nuclear medicine, and exploratory surgery, this investigation concludes that there was little, if any, added net cost from CT use in 1977, nor will there be in 1980. Computed tomography, generally recognized as a reliable and useful diagnostic modality, has the potential to reduce net costs provided that an optimal number of units can be made available to physicians and patients to achieve projected reductions in alternative procedures. This study examines the actual cost impact of CT on both cranial and body diagnostic procedures. For abdominal and mediastinal disorders, CT scanning is just beginning to emerge as a diagnostic modality. As such, clinical experience is somewhat limited and the authors assume that no significant reduction in conventional procedures took place in 1977. It is estimated that the approximately 375,000 CT body procedures performed in 1977 represent only a 5 percent cost increase over use of other diagnostic modalities. It is projected that 2,400,000 CT body procedures will be performed in 1980 and, depending on the assumptions used, total body diagnostic costs will increase only slightly or be reduced. Thirty-one tables appear throughout the text presenting cost data broken down by type of diagnostic procedure and projections by year. Appendixes present technical cost components for diagnostic procedures, the comparative efficacy of CT as revealed in abstracts of published literature, selected medical diagnoses, and references.

  7. Costs of cloud computing for a biometry department. A case study.

    Science.gov (United States)

    Knaus, J; Hieke, S; Binder, H; Schwarzer, G

    2013-01-01

    "Cloud" computing providers, such as the Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud dependent on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into ac count, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.

  8. High-Efficient Low-Cost Photovoltaics Recent Developments

    CERN Document Server

    Petrova-Koch, Vesselinka; Goetzberger, Adolf

    2009-01-01

    A bird's-eye view of the development and problems of recent photovoltaic cells and systems and prospects for Si feedstock is presented. High-efficient low-cost PV modules, making use of novel efficient solar cells (based on c-Si or III-V materials), and low cost solar concentrators are in the focus of this book. Recent developments of organic photovoltaics, which is expected to overcome its difficulties and to enter the market soon, are also included.

  9. The concept of computer software designed to identify and analyse logistics costs in agricultural enterprises

    Directory of Open Access Journals (Sweden)

    Karol Wajszczyk

    2009-01-01

    Full Text Available The study comprised research, development and computer programming work concerning a concept for an IT tool to be used in the identification and analysis of logistics costs in agricultural enterprises in terms of the process-based approach. As a result of the research and programming work, an overall functional and IT concept of software was developed for the identification and analysis of logistics costs in agricultural enterprises.

  10. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  11. High resolution computed tomography of positron emitters

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.

    1976-10-01

    High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals 8 mm x 30 mm x 50 mm deep and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter

  12. Concept for high speed computer printer

    Science.gov (United States)

    Stephens, J. W.

    1970-01-01

    Printer uses Kerr cell as light shutter for controlling the print on photosensitive paper. Applied to output data transfer, the information transfer rate of graphic computer printers could be increased to speeds approaching the data transfer rate of computer central processors (5000 to 10,000 lines per minute).

  13. Novel Low Cost, High Reliability Wind Turbine Drivetrain

    Energy Technology Data Exchange (ETDEWEB)

    Chobot, Anthony; Das, Debarshi; Mayer, Tyler; Markey, Zach; Martinson, Tim; Reeve, Hayden; Attridge, Paul; El-Wardany, Tahany

    2012-09-13

    Clipper Windpower, in collaboration with United Technologies Research Center, the National Renewable Energy Laboratory, and Hamilton Sundstrand Corporation, developed a low-cost, deflection-compliant, reliable, and serviceable chain drive speed increaser. This chain and sprocket drivetrain design offers significant breakthroughs in the areas of cost and serviceability and addresses the key challenges of current geared and direct-drive systems. The use of gearboxes has proven to be challenging; the large torques and bending loads associated with use in large multi-MW wind applications have generally limited demonstrated lifetime to 8-10 years [1]. The large cost of gearbox replacement and the required use of large, expensive cranes can result in gearbox replacement costs on the order of $1M, representing a significant impact to overall cost of energy (COE). Direct-drive machines eliminate the gearbox, thereby targeting increased reliability and reduced life-cycle cost. However, the slow rotational speeds require very large and costly generators, which also typically have an undesirable dependence on expensive rare-earth magnet materials and large structural penalties for precise air gap control. The cost of rare-earth materials has increased 20X in the last 8 years representing a key risk to ever realizing the promised cost of energy reductions from direct-drive generators. A common challenge to both geared and direct drive architectures is a limited ability to manage input shaft deflections. The proposed Clipper drivetrain is deflection-compliant, insulating later drivetrain stages and generators from off-axis loads. The system is modular, allowing for all key parts to be removed and replaced without the use of a high capacity crane. Finally, the technology modularity allows for scalability and many possible drivetrain topologies. These benefits enable reductions in drivetrain capital cost by 10.0%, levelized replacement and O&M costs by 26.7%, and overall cost of

  14. Computational Sensing Using Low-Cost and Mobile Plasmonic Readers Designed by Machine Learning

    KAUST Repository

    Ballard, Zachary S.

    2017-01-27

    Plasmonic sensors have been used for a wide range of biological and chemical sensing applications. Emerging nanofabrication techniques have enabled these sensors to be cost-effectively mass manufactured onto various types of substrates. To accompany these advances, major improvements in sensor read-out devices must also be achieved to fully realize the broad impact of plasmonic nanosensors. Here, we propose a machine learning framework which can be used to design low-cost and mobile multispectral plasmonic readers that do not use traditionally employed bulky and expensive stabilized light sources or high-resolution spectrometers. By training a feature selection model over a large set of fabricated plasmonic nanosensors, we select the optimal set of illumination light-emitting diodes needed to create a minimum-error refractive index prediction model, which statistically takes into account the varied spectral responses and fabrication-induced variability of a given sensor design. This computational sensing approach was experimentally validated using a modular mobile plasmonic reader. We tested different plasmonic sensors with hexagonal and square periodicity nanohole arrays and revealed that the optimal illumination bands differ from those that are “intuitively” selected based on the spectral features of the sensor, e.g., transmission peaks or valleys. This framework provides a universal tool for the plasmonics community to design low-cost and mobile multispectral readers, helping the translation of nanosensing technologies to various emerging applications such as wearable sensing, personalized medicine, and point-of-care diagnostics. Beyond plasmonics, other types of sensors that operate based on spectral changes can broadly benefit from this approach, including e.g., aptamer-enabled nanoparticle assays and graphene-based sensors, among others.
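
    A rough sketch of the underlying idea is greedily selecting a small subset of illumination channels that minimizes the error of a simple regression model predicting refractive index. The code below is a generic forward-selection illustration on synthetic data; it is not the authors' trained feature-selection model, and all names and values are assumptions for illustration.

        # Greedy forward selection of candidate LED channels for a linear
        # refractive-index predictor, evaluated by mean squared error.
        import numpy as np

        def forward_select(X, y, k):
            chosen = []
            for _ in range(k):
                best, best_err = None, np.inf
                for j in range(X.shape[1]):
                    if j in chosen:
                        continue
                    cols = chosen + [j]
                    A = np.column_stack([X[:, cols], np.ones(len(y))])
                    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                    err = np.mean((A @ coef - y) ** 2)
                    if err < best_err:
                        best, best_err = j, err
                chosen.append(best)
            return chosen

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))                    # responses at 12 candidate LEDs
        y = X[:, [1, 4, 7]] @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=200)
        print(forward_select(X, y, k=4))                  # should pick columns 1, 4, 7 first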

  15. Performance Assessment of a Custom, Portable, and Low-Cost Brain-Computer Interface Platform.

    Science.gov (United States)

    McCrimmon, Colin M; Fu, Jonathan Lee; Wang, Ming; Lopes, Lucas Silva; Wang, Po T; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An Hong

    2017-10-01

    Conventional brain-computer interfaces (BCIs) are often expensive, complex to operate, and lack portability, which confines their use to laboratory settings. Portable, inexpensive BCIs can mitigate these problems, but it remains unclear whether their low-cost design compromises their performance. Therefore, we developed a portable, low-cost BCI and compared its performance to that of a conventional BCI. The BCI was assembled by integrating a custom electroencephalogram (EEG) amplifier with an open-source microcontroller and a touchscreen. The function of the amplifier was first validated against a commercial bioamplifier, followed by a head-to-head comparison between the custom BCI (using four EEG channels) and a conventional 32-channel BCI. Specifically, five able-bodied subjects were cued to alternate between hand opening/closing and remaining motionless while the BCI decoded their movement state in real time and provided visual feedback through a light-emitting diode. Subjects repeated the above task for a total of 10 trials, and were unaware of which system was being used. The performance in each trial was defined as the temporal correlation between the cues and the decoded states. The EEG data simultaneously acquired with the custom and commercial amplifiers were visually similar and highly correlated (ρ = 0.79). The decoding performances of the custom and conventional BCIs averaged across trials and subjects were 0.70 ± 0.12 and 0.68 ± 0.10, respectively, and were not significantly different. The performance of our portable, low-cost BCI is comparable to that of conventional BCIs. Platforms such as the one developed here are suitable for BCI applications outside of a laboratory.
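
    The per-trial performance metric described above is the Pearson correlation between the cue sequence and the decoded movement state over time. A self-contained sketch with made-up binary sequences (not data from the study):

        # Pearson correlation between cued and decoded movement states.
        from math import sqrt

        def pearson(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = sqrt(sum((a - mx) ** 2 for a in x))
            sy = sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy)

        cues    = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0]
        decoded = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0]
        print(f"trial performance: {pearson(cues, decoded):.2f}")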

  16. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  18. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  19. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  20. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  1. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine provides two classes of nodes that users access: login nodes and compute nodes. Peregrine has four login nodes, each built around Intel E5 processors; in addition to the /scratch file systems, the /mss file system is mounted on all login nodes. Peregrine has 2592 compute nodes.

  2. An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.

    Science.gov (United States)

    Winstead, Wayland H.; And Others

    An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner of computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on the personal mix of courses taught. The alternative method used averaging…
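
    The traditional proration it compares against works as sketched below: each faculty member's salary is split across course levels in proportion to their own teaching mix, then divided by the credit hours produced at each level. All figures and names are hypothetical, and the sketch is only an illustration of that arithmetic.

        # Prorate each faculty salary across course levels by personal teaching mix,
        # then compute a unit cost per credit hour at each level.
        faculty = [
            # (salary, {course_level: credit_hours_taught})
            (90_000, {"lower": 6, "upper": 3}),
            (70_000, {"lower": 3, "upper": 6, "graduate": 3}),
        ]
        credit_hours_produced = {"lower": 900, "upper": 450, "graduate": 90}

        cost_by_level = {level: 0.0 for level in credit_hours_produced}
        for salary, mix in faculty:
            total_taught = sum(mix.values())
            for level, hours in mix.items():
                cost_by_level[level] += salary * hours / total_taught

        for level, cost in cost_by_level.items():
            print(level, round(cost / credit_hours_produced[level], 2))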

  3. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    Science.gov (United States)

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  4. The cognitive dynamics of computer science cost-effective large scale software development

    CERN Document Server

    De Gyurky, Szabolcs Michael; John Wiley & Sons

    2006-01-01

    This book has three major objectives: To propose an ontology for computer software; To provide a methodology for development of large software systems to cost and schedule that is based on the ontology; To offer an alternative vision regarding the development of truly autonomous systems.

  5. Low-cost addition-subtraction sequences for the final exponentiation computation in pairings

    DEFF Research Database (Denmark)

    Guzmán-Trampe, Juan E; Cruz-Cortéz, Nareli; Dominguez Perez, Luis

    2014-01-01

    In this paper, we address the problem of finding low cost addition–subtraction sequences for situations where a doubling step is significantly cheaper than a non-doubling one. One application of this setting appears in the computation of the final exponentiation step of the reduced Tate pairing d...

  6. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    Science.gov (United States)

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site (http://proteomics.mcw.edu/vipdac).

  7. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, "Can computer science help?" always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  8. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grow, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  9. Low-cost computer mouse for the elderly or disabled in Taiwan.

    Science.gov (United States)

    Chen, C-C; Chen, W-L; Chen, B-N; Shih, Y-Y; Lai, J-S; Chen, Y-L

    2014-01-01

    A mouse is an important communication interface between a human and a computer, but it is still difficult for the elderly or disabled to use. To develop a low-cost computer mouse auxiliary tool. The principal structure of the low-cost mouse auxiliary tool is the IR (infrared ray) array module and the Wii icon sensor module, combined with reflective tape and the SQL Server database. This brings several benefits, including cheap hardware cost, fluent control, prompt response, adaptive adjustment and portability. It also carries a game module for training and evaluation, which helps trainees improve sensory awareness and concentration. In the intervention and maintenance phases, the improvements in clicking accuracy and time of use reached statistical significance (p < 0.05). The development of the low-cost adaptive computer mouse auxiliary tool was completed during the study, and it was verified as being low cost, easy to operate and adaptable. The mouse auxiliary tool is suitable for patients with physical disabilities who retain independent control of some parts of their limbs; the user only needs to attach the reflective tape to an independently controlled body part in order to operate the mouse auxiliary tool.

  10. The high cost of low-acuity ICU outliers.

    Science.gov (United States)

    Dahl, Deborah; Wojtal, Greg G; Breslow, Michael J; Holl, Randy; Huguez, Debra; Stone, David; Korpi, Gloria

    2012-01-01

    Direct variable costs were determined on each hospital day for all patients with an intensive care unit (ICU) stay in four Phoenix-area hospital ICUs. Average daily direct variable cost in the four ICUs ranged from $1,436 to $1,759 and represented 69.4 percent and 45.7 percent of total hospital stay cost for medical and surgical patients, respectively. Daily ICU cost and length of stay (LOS) were higher in patients with higher ICU admission acuity of illness as measured by the APACHE risk prediction methodology; 16.2 percent of patients had an ICU stay in excess of six days, and these LOS outliers accounted for 56.7 percent of total ICU cost. While higher-acuity patients were more likely to be ICU LOS outliers, 11.1 percent of low-risk patients were outliers. The low-risk group included 69.4 percent of the ICU population and accounted for 47 percent of all LOS outliers. Low-risk LOS outliers accounted for 25.3 percent of ICU cost and incurred fivefold higher hospital stay costs and mortality rates. These data suggest that severity of illness is an important determinant of daily resource consumption and LOS, regardless of whether the patient arrives in the ICU with high acuity or develops complications that increase acuity. The finding that a substantial number of long-stay patients come into the ICU with low acuity and deteriorate after ICU admission is not widely recognized and represents an important opportunity to improve patient outcomes and lower costs. ICUs should consider adding low-risk LOS data to their quality and financial performance reports.
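
    The outlier accounting behind these percentages is straightforward: flag ICU stays longer than six days and total their share of direct variable cost. A minimal sketch with invented records (not the study data):

        # Share of ICU cost attributable to length-of-stay (LOS) outliers (> 6 days).
        stays = [  # (icu_days, total_icu_cost_usd) - hypothetical records
            (2, 3_100), (4, 6_500), (9, 15_800), (3, 4_700),
            (12, 21_400), (5, 8_200), (7, 11_900), (1, 1_600),
        ]
        outliers = [(d, c) for d, c in stays if d > 6]
        total_cost = sum(c for _, c in stays)
        outlier_cost = sum(c for _, c in outliers)
        print(f"{len(outliers) / len(stays):.1%} of stays are LOS outliers; "
              f"they account for {outlier_cost / total_cost:.1%} of ICU cost")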

  11. Low Cost Lithography Tool for High Brightness LED Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Hawryluk; Emily True

    2012-06-30

    The objective of this activity was to address the need for improved manufacturing tools for LEDs. Improvements include lower cost (both capital equipment cost reductions and cost-of-ownership reductions), better automation and better yields. To meet the DOE objective of $1-2/kilolumen, it will be necessary to develop these highly automated manufacturing tools. Lithography is used extensively in the fabrication of high-brightness LEDs, but the tools used to date are not scalable to high-volume manufacturing. This activity addressed the LED lithography process. During R&D and low-volume manufacturing, most LED companies use contact printers. However, several industries have shown that these printers are incompatible with high-volume manufacturing, and the LED industry needs to evolve to projection steppers. The need for projection lithography tools for LED manufacturing is identified in the Solid State Lighting Manufacturing Roadmap Draft, June 2009. The Roadmap states that projection tools are needed by 2011. This work will modify a stepper, originally designed for semiconductor manufacturing, for use in LED manufacturing. This work addresses improvements to yield, material handling, automation and throughput for LED manufacturing while reducing the capital equipment cost.

  12. A high-performance, low-cost, leading edge discriminator

    Indian Academy of Sciences (India)

    A high-performance, low-cost, leading edge discriminator has been designed with a timing performance comparable to state-of-the-art, commercially available discriminators. A timing error of 16 ps is achieved under ideal operating conditions. Under more realistic operating conditions the discriminator displays a ...

  13. Command vector memory systems: high performance at low cost

    OpenAIRE

    Corbal San Adrián, Jesús; Espasa Sans, Roger; Valero Cortés, Mateo

    1998-01-01

    The focus of this paper is on designing both a low cost and high performance, high bandwidth vector memory system that takes advantage of modern commodity SDRAM memory chips. To successfully extract the full bandwidth from SDRAM parts, we propose a new memory system organization based on sending commands to the memory system as opposed to sending individual addresses. A command specifies, in a few bytes, a request for multiple independent memory words. A command is similar to a burst found in...

  14. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of calculations to several hours. Modern trends in computer technology show an increase of CPU cores in workstations and speed increases in local networks, and as a result the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes is dropping. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images need to be processed.
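
    As noted above, the work is embarrassingly parallel: independent tiles, stereopairs or image blocks can be farmed out to worker processes. A minimal sketch of that pattern follows; the tile-processing body is a placeholder stand-in, not a real photogrammetric operation.

        # Independent image tiles processed across worker processes.
        from multiprocessing import Pool

        def process_tile(tile_id):
            # placeholder for orthophoto construction, tie-point measurement, etc.
            return tile_id, sum(i * i for i in range(10_000))

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                results = pool.map(process_tile, range(16))
            print(f"processed {len(results)} tiles")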

  15. High costs of female choice in a lekking lizard.

    Directory of Open Access Journals (Sweden)

    Maren N Vitousek

    2007-06-01

    Full Text Available Although the cost of mate choice is an essential component of the evolution and maintenance of sexual selection, the energetic cost of female choice has not previously been assessed directly. Here we report that females can incur high energetic costs as a result of discriminating among potential mates. We used heart rate biologging to quantify energetic expenditure in lek-mating female Galápagos marine iguanas (Amblyrhynchus cristatus). Receptive females spent 78.9+/-23.2 kJ of energy on mate choice over a 30-day period, which is equivalent to approximately 3/4 of one day's energy budget. Females that spent more time on the territories of high-quality, high-activity males displayed greater energetic expenditure on mate choice, lost more mass, and showed a trend towards producing smaller follicles. Choosy females also appear to face a reduced probability of survival if El Niño conditions occur in the year following breeding. These findings indicate that female choice can carry significant costs, and suggest that the benefits that lek-mating females gain through mating with a preferred male may be higher than previously predicted.

  16. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  17. A low cost computer-controlled electrochemical measurement system for education and research

    International Nuclear Information System (INIS)

    Cottis, R.A.

    1989-01-01

    With the advent of low cost computers of significant processing power, it has become economically attractive, as well as offering practical advantages, to replace conventional electrochemical instrumentation with computer-based equipment. For example, the equipment to be described can perform all of the functions required for the measurement of a potentiodynamic polarization curve, replacing the conventional arrangement of sweep generator, potentiostat and chart recorder at a cost (based on the purchase cost of parts) which is less than that of most chart recorders alone. Additionally the use of computer control at a relatively low level provides a versatility (assuming the development of suitable software) which cannot easily be matched by conventional instruments. As a result of these considerations a simple computer-controlled electrochemical measurement system has been developed, with a primary aim being its use in teaching an MSc class in corrosion science and engineering, with additional applications in MSc and PhD research. For educational reasons the design of the user interface has tried to make the internal operation of the unit as obvious as possible, and thereby minimize the tendency for students to treat the unit as a 'black box' with incomprehensible inner workings. This has resulted in a unit in which the three main components of function generator, potentiostat and recorder are presented as independent areas on the front panel, and can be configured by the user in exactly the same way as conventional instruments. (author) 11 figs

  18. A low cost computer-controlled electrochemical measurement system for education and research

    Energy Technology Data Exchange (ETDEWEB)

    Cottis, R A [Manchester Univ. (UK). Inst. of Science and Technology

    1989-01-01

    With the advent of low cost computers of significant processing power, it has become economically attractive, as well as offering practical advantages, to replace conventional electrochemical instrumentation with computer-based equipment. For example, the equipment to be described can perform all of the functions required for the measurement of a potentiodynamic polarization curve, replacing the conventional arrangement of sweep generator, potentiostat and chart recorder at a cost (based on the purchase cost of parts) which is less than that of most chart recorders alone. Additionally the use of computer control at a relatively low level provides a versatility (assuming the development of suitable software) which cannot easily be matched by conventional instruments. As a result of these considerations a simple computer-controlled electrochemical measurement system has been developed, with a primary aim being its use in teaching an MSc class in corrosion science and engineering, with additional applications in MSc and PhD research. For educational reasons the design of the user interface has tried to make the internal operation of the unit as obvious as possible, and thereby minimize the tendency for students to treat the unit as a 'black box' with incomprehensible inner workings. This has resulted in a unit in which the three main components of function generator, potentiostat and recorder are presented as independent areas on the front panel, and can be configured by the user in exactly the same way as conventional instruments. (author) 11 figs.

  19. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  20. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  1. Hybrid Cloud Computing Architecture Optimization by Total Cost of Ownership Criterion

    Directory of Open Access Journals (Sweden)

    Elena Valeryevna Makarenko

    2014-12-01

    Full Text Available Achieving the goals of information security is a key factor in the decision to outsource information technology and, in particular, in the decision to migrate organizational data, applications, and other resources to an infrastructure based on cloud computing. The key issue in selecting an optimal architecture and subsequently migrating business applications and data to the organization's cloud information environment is the total cost of ownership of the IT infrastructure. This paper focuses on solving the problem of minimizing the total cost of ownership of the cloud.

  2. Low cost phantom for computed radiology; Objeto de teste de baixo custo para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Travassos, Paulo Cesar B.; Magalhaes, Luis Alexandre G., E-mail: pctravassos@ufrj.br [Universidade do Estado do Rio de Janeiro (IBRGA/UERJ), RJ (Brazil). Laboratorio de Ciencias Radiologicas; Augusto, Fernando M.; Sant' Yves, Thalis L.A.; Goncalves, Elicardo A.S. [Instituto Nacional de Cancer (INCA), Rio de Janeiro, RJ (Brazil); Botelho, Marina A. [Hospital Universitario Pedro Ernesto (UERJ), Rio de Janeiro, RJ (Brazil)

    2012-08-15

    This article presents the results obtained from a low cost phantom, used to analyze Computed Radiology (CR) equipment. The phantom was constructed to test a few parameters related to image quality, as described in [1-9]. Materials which can be easily purchased were used in the construction of the phantom, with a total cost of approximately US$100.00. A bar pattern was included only to verify the efficacy of the grids in determining spatial resolution, and was not included in the budget because the data were acquired from the grids. (author)

  3. Incentive motivation deficits in schizophrenia reflect effort computation impairments during cost-benefit decision-making.

    Science.gov (United States)

    Fervaha, Gagan; Graff-Guerrero, Ariel; Zakzanis, Konstantine K; Foussias, George; Agid, Ofer; Remington, Gary

    2013-11-01

    Motivational impairments are a core feature of schizophrenia and although there are numerous reports studying this feature using clinical rating scales, objective behavioural assessments are lacking. Here, we use a translational paradigm to measure incentive motivation in individuals with schizophrenia. Sixteen stable outpatients with schizophrenia and sixteen matched healthy controls completed a modified version of the Effort Expenditure for Rewards Task that accounts for differences in motoric ability. Briefly, subjects were presented with a series of trials where they may choose to expend a greater amount of effort for a larger monetary reward versus less effort for a smaller reward. Additionally, the probability of receiving money for a given trial was varied at 12%, 50% and 88%. Clinical and other reward-related variables were also evaluated. Patients opted to expend greater effort significantly less than controls for trials of high, but uncertain (i.e. 50% and 88% probability) incentive value, which was related to amotivation and neurocognitive deficits. Other abnormalities were also noted but were related to different clinical variables such as impulsivity (low reward and 12% probability). These motivational deficits were not due to group differences in reward learning, reward valuation or hedonic capacity. Our findings offer novel support for incentive motivation deficits in schizophrenia. Clinical amotivation is associated with impairments in the computation of effort during cost-benefit decision-making. This objective translational paradigm may guide future investigations of the neural circuitry underlying these motivational impairments. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Low cost, high yield IFE reactors: Revisiting Velikhov's vaporizing blankets

    International Nuclear Information System (INIS)

    Logan, B.G.

    1992-01-01

    The performance (efficiency and cost) of IFE reactors using MHD conversion is explored for target blanket shells of various materials vaporized and ionized by high fusion yields (5 to 500 GJ). A magnetized, prestressed reactor chamber concept is modeled together with previously developed models for the Compact Fusion Advanced Rankine II (CFARII) MHD Balance-of-Plant (BoP). Using conservative 1-D neutronics models, high fusion yields (20 to 80 GJ) are found necessary to heat Flibe, lithium, and lead-lithium blankets to MHD plasma temperatures, at initial solid thicknesses sufficient to capture most of the fusion yield. Advanced drivers/targets would need to be developed to achieve a "Bang per Buck" figure-of-merit ≳ 20 to 40 joules of yield per driver dollar for this scheme to be competitive with these blanket materials. Alternatively, more realistic neutronics models and better materials such as lithium hydride may lower the minimum required yields substantially. The very low CFARII BoP costs (contributing only 3 mills/kWehr to CoE) allow this type of reactor, given sufficient advances that non-driver costs dominate, to ultimately produce electricity at a much lower cost than any current nuclear plant.

  5. Cost-effective computational method for radiation heat transfer in semi-crystalline polymers

    Science.gov (United States)

    Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice

    2018-05-01

    This paper introduces a cost-effective numerical model for infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature and the obtained results were used as input in the numerical studies. The model was built on an optically homogeneous medium assumption, while the strong variation in the thermo-optical properties of semi-crystalline PE under heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed-loop computation, where the absorbed radiation was computed using an in-house developed radiation heat transfer algorithm, RAYHEAT, and the computed results were transferred into the commercial software COMSOL Multiphysics for solving the transient heat transfer problem to predict the temperature field. The predicted temperature field was used to update the thermo-optical properties of PE, which vary under heating. In order to analyze the accuracy of the numerical model, experimental analyses were carried out by performing IR-thermographic measurements during the heating of a PE plate. The applicability of the model in terms of computational cost, number of numerical inputs and accuracy is highlighted.
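
    The closed-loop idea described above can be sketched, in much simplified form, as a lumped (0-D) heat balance in which the absorbed radiative flux is recomputed every time step from a temperature-dependent absorptivity. The Python below is only an illustrative sketch with invented property values; it is neither RAYHEAT nor the COMSOL coupling used in the paper.

        import numpy as np

        def absorptivity(T):
            """Hypothetical temperature-dependent absorptivity of the PE plate:
            absorption rises smoothly as the crystalline phase melts."""
            return 0.55 + 0.3 / (1.0 + np.exp(-(T - 400.0) / 15.0))

        def heat_plate(q_incident=20e3, T0=293.0, t_end=60.0, dt=0.1,
                       rho=950.0, cp=2300.0, thickness=3e-3, h_conv=10.0, T_amb=293.0):
            """Lumped heat balance: update the absorbed flux from the current temperature
            at every step (the 'closed loop'), then advance the temperature."""
            T = T0
            history = []
            for _ in range(int(t_end / dt)):
                q_abs = absorptivity(T) * q_incident      # radiation step: property update
                q_loss = h_conv * (T - T_amb)             # convective loss
                T += (q_abs - q_loss) * dt / (rho * cp * thickness)   # thermal step
                history.append(T)
            return np.array(history)

        temps = heat_plate()
        print(f"final plate temperature ~ {temps[-1]:.1f} K")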

  6. Low-cost autonomous perceptron neural network inspired by quantum computation

    Science.gov (United States)

    Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud

    2017-11-01

    Achieving low-cost learning with reliable accuracy is an important goal for intelligent machines, saving time and energy and allowing the learning process to run on machines with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing, composed of a single neuron, that classifies linearly separable applications after a single training iteration, O(1). The algorithm is applied to a real-world data set and the results outperform the other state-of-the-art algorithms.
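
    For readers unfamiliar with single-pass training, the classical perceptron below (plain NumPy, synthetic linearly separable data) shows the baseline idea of updating a single neuron in one sweep over the data. It is a point of reference only and is not the quantum-inspired algorithm proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic, linearly separable two-class data
        X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
        y = np.hstack([-np.ones(50), np.ones(50)])

        w = np.zeros(2)
        b = 0.0
        for xi, yi in zip(X, y):              # a single pass over the training set
            if yi * (xi @ w + b) <= 0:        # misclassified: apply the perceptron update
                w += yi * xi
                b += yi

        accuracy = np.mean(np.sign(X @ w + b) == y)
        print(f"training accuracy after one pass: {accuracy:.2f}")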

  7. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of the computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers for high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during ten years from 2004. Finally, we drew the co-authorship network for 45 top-authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  8. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described

  9. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    International Nuclear Information System (INIS)

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.
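
    The quoted online speed requirements come from counting templates and correlation operations. A back-of-the-envelope version of that counting is sketched below; the template count, sampling rate and segment length are hypothetical round numbers and the 6 N log2 N FFT cost is a standard rule of thumb, so this reproduces only the style of the estimate, not the paper's exact figures.

        import math

        n_templates = 1e4        # assumed number of PN templates covering the mass range
        f_sample = 2048.0        # assumed detector sampling rate, Hz
        t_segment = 256.0        # assumed length of one analysis segment, s
        n = int(f_sample * t_segment)            # samples per segment

        # One FFT-based correlation costs roughly 6*N*log2(N) floating-point operations,
        # and the pipeline must keep up with real time (one segment every t_segment seconds).
        flops_per_template = 6 * n * math.log2(n)
        sustained_rate = n_templates * flops_per_template / t_segment

        print(f"required sustained rate ~ {sustained_rate / 1e9:.1f} Gflops")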

  10. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    CERN Document Server

    Pai, A; Dhurandhar, S V

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.

  11. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  12. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  13. COST EFFECTIVE AND HIGH RESOLUTION SUBSURFACE CHARACTERIZATION USING HYDRAULIC TOMOGRAPHY

    Science.gov (United States)

    2017-08-01

    The objective of this project is to provide the DoD and its remediation contractors with the HT technology for delineating the spatial distribution of... Hydraulic Tomography (HT) is a high-resolution... performance of subsurface remedial actions at environmental sites. The good technical performance and cost-effectiveness of HT have been demonstrated in...

  14. Patents Associated with High-Cost Drugs in Australia

    OpenAIRE

    Christie, Andrew F.; Dent, Chris; McIntyre, Peter; Wilson, Lachlan; Studdert, David M.

    2013-01-01

    Australia, like most countries, faces high and rapidly-rising drug costs. There are longstanding concerns about pharmaceutical companies inappropriately extending their monopoly position by "evergreening" blockbuster drugs, through misuse of the patent system. There is, however, very little empirical information about this behaviour. We fill the gap by analysing all of the patents associated with 15 of the costliest drugs in Australia over the last 20 years. Specifically, we search the patent...

  15. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
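
    A generic levelized life-cycle power cost has the form of discounted total costs divided by discounted total generation. The short Python sketch below shows that calculation with invented plant data; it illustrates the kind of quantity POPCYCLE computes but is not the code's own methodology or input set.

        def levelized_cost(costs, energy, discount_rate):
            """costs[t] in $ and energy[t] in kWh for year t; returns $/kWh."""
            pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
            pv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy))
            return pv_costs / pv_energy

        # Hypothetical 30-year plant: capital outlay in year 0, then fuel + O&M each year.
        years = 30
        costs = [2.0e9] + [1.5e8] * years      # $
        energy = [0.0] + [7.0e9] * years       # kWh (no generation in the construction year)
        print(f"levelized cost ~ {levelized_cost(costs, energy, 0.05):.3f} $/kWh")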

  16. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reduced computational costs even though microprocessors become more powerful each day. Real Time Operating Systems for embedded systems usually have advanced features to administer the resources of the applications they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
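
    For context, the quantity in question is usually obtained from the standard fixed-priority response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. The sketch below implements that textbook iteration on an invented task set; the paper's contribution is a cheaper way of computing these values, which is not reproduced here.

        import math

        def wcrt(tasks, i):
            """Fixed-point iteration R = C_i + sum_j ceil(R/T_j)*C_j over higher-priority tasks j.
            tasks = [(C, T), ...] sorted from highest to lowest priority; deadline = period."""
            C_i, T_i = tasks[i]
            R = C_i
            while True:
                interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
                R_next = C_i + interference
                if R_next == R:
                    return R                    # converged: worst-case response time
                if R_next > T_i:
                    return None                 # task cannot meet its deadline
                R = R_next

        tasks = [(1, 4), (2, 6), (3, 13)]       # invented task set, highest priority first
        for i in range(len(tasks)):
            print(f"task {i}: WCRT = {wcrt(tasks, i)}")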

  17. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  18. Norplant's high cost may prohibit use in Title 10 clinics.

    Science.gov (United States)

    1991-04-01

    The article discusses the prohibitive cost of Norplant for the low-income Title 10 population served in public family planning clinics in the U.S. It is argued that it is unfair for U.S. users to pay $350 to Wyeth-Ayerst when another pharmaceutical company provides developing countries with Norplant at a cost of $14-23. Although the public sector and private foundations funded the development, it was explained that the company needs to recoup its investment in training and education. Medicaid and third-party payers such as insurance companies will reimburse the higher price, but if the public sector price were lowered the company would not make a profit, and everyone would then argue for reimbursement at the lower cost. It was suggested that a boycott of American Home Products, Wyeth-Ayerst's parent company, be organized. Public family planning providers with particularly low funding noted that a budget of $30,000 would cover only 85 users, with priority in that circumstance going to drug abusers and women with multiple pregnancies, leaving the needs of teenagers unmet. Another remarked that the client population served is 4,700 with $54,000 in funding, which is already fully committed. The general trend of comments was that for low-income women the cost is too high.

  19. A practical technique for benefit-cost analysis of computer-aided design and drafting systems

    International Nuclear Information System (INIS)

    Shah, R.R.; Yan, G.

    1979-03-01

    Analysis of benefits and costs associated with the operation of Computer-Aided Design and Drafting Systems (CADDS) is needed to derive economic justification for acquiring new systems, as well as to evaluate the performance of existing installations. In practice, however, such analyses are difficult to perform since most technical and economic advantages of CADDS are 'irreducibles', i.e. cannot be readily translated into monetary terms. In this paper, a practical technique for economic analysis of CADDS in a drawing office environment is presented. A 'worst case' approach is taken since the increase in productivity of existing manpower is the only benefit considered, while all foreseen costs are taken into account. Methods of estimating benefits and costs are described. The procedure for performing the analysis is illustrated by a case study based on the drawing office activities at Atomic Energy of Canada Limited. (auth)
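
    A minimal 'worst case' comparison of the kind described, in which the productivity gain of existing staff is the only benefit while capital and operating costs are counted in full, is sketched below. All figures are hypothetical and are not taken from the AECL case study.

        def npv(cashflows, rate):
            """Net present value of a list of yearly cash flows starting at year 0."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        # Hypothetical drawing office: the only benefit counted is drafting time saved.
        n_seats = 5
        drafter_cost = 70_000        # fully loaded annual cost per drafter, $
        productivity_gain = 0.30     # fraction of drafting effort saved by CADDS
        annual_benefit = n_seats * drafter_cost * productivity_gain

        capital = 250_000            # hardware and software purchase, year 0
        annual_opex = 40_000         # maintenance, training, supplies per year
        years, rate = 7, 0.10

        pv_benefits = npv([0.0] + [annual_benefit] * years, rate)
        pv_costs = capital + npv([0.0] + [annual_opex] * years, rate)
        print(f"benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
        print(f"net present value: ${pv_benefits - pv_costs:,.0f}")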

  20. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Science.gov (United States)

    2010-01-01

    ... Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions (a... loan cost rate for various transactions, as well as instructions, explanations, and examples for.... (2) Term of the transaction. For purposes of total annual loan cost disclosures, the term of a...

  1. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  2. Analysis for the high-level waste disposal cost object

    International Nuclear Information System (INIS)

    Kim, S. K.; Lee, J. R.; Choi, J. W.; Han, P. S.

    2003-01-01

    The purpose of this study is to analyse the ratio of each cost object in the disposal cost estimation. According to the results, operating cost is the most significant object in the total cost. Disposal costs and product costs differ considerably in their constituents: while product costs may be classified into direct materials cost, direct manufacturing labor cost, and factory overhead, the disposal cost factors comprise technical factors and non-technical factors.

  3. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  4. Cheap imports next ordeal for Europe's high-cost producers

    International Nuclear Information System (INIS)

    Chynoweth, E.

    1993-01-01

    About one-third of Europe's 34 cracker and downstream units lost money in the final quarter of 1992, says Chem Systems (London). Average return on capital employed is negative - at the same level as in the gloomy days of the early 1980s - yet average operating rates are 80% now, compared with 65% a decade ago. Margins at what Chem Systems calls leader cracks (naphtha-based units that use good modern practices) are DM42/m.t. ethylene, DM100/m.t. less than they were in 1991. The consultant firm's recent report, European Petrochemical Strategy in the 1990s, suggests closure of 5%-10% of high-cost production. But, Chem Systems director Roger Longley states: 'We are not advocating wholesale closure. There are a small number (of plants) where additional investment would not pay back and that would be economical to shut. Cost reduction through mergers and acquisitions and operational changes is much more important, especially from an international aspect,' Longley says. 'One thing people do not fully appreciate is that Europe is a high-cost region for petrochemical production,' he adds. Traditionally, Europe exports 5% of its ethylene output; now it needs to tolerate cheap imports.

  5. Patents associated with high-cost drugs in Australia.

    Directory of Open Access Journals (Sweden)

    Andrew F Christie

    Full Text Available Australia, like most countries, faces high and rapidly-rising drug costs. There are longstanding concerns about pharmaceutical companies inappropriately extending their monopoly position by "evergreening" blockbuster drugs, through misuse of the patent system. There is, however, very little empirical information about this behaviour. We fill the gap by analysing all of the patents associated with 15 of the costliest drugs in Australia over the last 20 years. Specifically, we search the patent register to identify all the granted patents that cover the active pharmaceutical ingredient of the high-cost drugs. Then, we classify the patents by type, and identify their owners. We find a mean of 49 patents associated with each drug. Three-quarters of these patents are owned by companies other than the drug's originator. Surprisingly, the majority of all patents are owned by companies that do not have a record of developing top-selling drugs. Our findings show that a multitude of players seek monopoly control over innovations to blockbuster drugs. Consequently, attempts to control drug costs by mitigating misuse of the patent system are likely to miss the mark if they focus only on the patenting activities of originators.

  6. Patents associated with high-cost drugs in Australia.

    Science.gov (United States)

    Christie, Andrew F; Dent, Chris; McIntyre, Peter; Wilson, Lachlan; Studdert, David M

    2013-01-01

    Australia, like most countries, faces high and rapidly-rising drug costs. There are longstanding concerns about pharmaceutical companies inappropriately extending their monopoly position by "evergreening" blockbuster drugs, through misuse of the patent system. There is, however, very little empirical information about this behaviour. We fill the gap by analysing all of the patents associated with 15 of the costliest drugs in Australia over the last 20 years. Specifically, we search the patent register to identify all the granted patents that cover the active pharmaceutical ingredient of the high-cost drugs. Then, we classify the patents by type, and identify their owners. We find a mean of 49 patents associated with each drug. Three-quarters of these patents are owned by companies other than the drug's originator. Surprisingly, the majority of all patents are owned by companies that do not have a record of developing top-selling drugs. Our findings show that a multitude of players seek monopoly control over innovations to blockbuster drugs. Consequently, attempts to control drug costs by mitigating misuse of the patent system are likely to miss the mark if they focus only on the patenting activities of originators.

  7. Chest Computed Tomographic Image Screening for Cystic Lung Diseases in Patients with Spontaneous Pneumothorax Is Cost Effective.

    Science.gov (United States)

    Gupta, Nishant; Langenderfer, Dale; McCormack, Francis X; Schauer, Daniel P; Eckman, Mark H

    2017-01-01

    Patients without a known history of lung disease presenting with a spontaneous pneumothorax are generally diagnosed as having primary spontaneous pneumothorax. However, occult diffuse cystic lung diseases such as Birt-Hogg-Dubé syndrome (BHD), lymphangioleiomyomatosis (LAM), and pulmonary Langerhans cell histiocytosis (PLCH) can also first present with a spontaneous pneumothorax, and their early identification by high-resolution computed tomographic (HRCT) chest imaging has implications for subsequent management. The objective of our study was to evaluate the cost-effectiveness of HRCT chest imaging to facilitate early diagnosis of LAM, BHD, and PLCH. We constructed a Markov state-transition model to assess the cost-effectiveness of screening HRCT to facilitate early diagnosis of diffuse cystic lung diseases in patients presenting with an apparent primary spontaneous pneumothorax. Baseline data for prevalence of BHD, LAM, and PLCH and rates of recurrent pneumothoraces in each of these diseases were derived from the literature. Costs were extracted from 2014 Medicare data. We compared a strategy of HRCT screening followed by pleurodesis in patients with LAM, BHD, or PLCH versus conventional management with no HRCT screening. In our base case analysis, screening for the presence of BHD, LAM, or PLCH in patients presenting with a spontaneous pneumothorax was cost effective, with a marginal cost-effectiveness ratio of $1,427 per quality-adjusted life-year gained. Sensitivity analysis showed that screening HRCT remained cost effective for diffuse cystic lung diseases prevalence as low as 0.01%. HRCT image screening for BHD, LAM, and PLCH in patients with apparent primary spontaneous pneumothorax is cost effective. Clinicians should consider performing a screening HRCT in patients presenting with apparent primary spontaneous pneumothorax.
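
    The headline figure is an incremental cost-effectiveness ratio (ICER): the extra cost of the screening strategy divided by the extra quality-adjusted life-years it yields, judged against a willingness-to-pay threshold. The sketch below shows that calculation with made-up per-patient averages; it is not the study's Markov model or its inputs.

        def icer(cost_new, qaly_new, cost_old, qaly_old):
            """Incremental cost-effectiveness ratio, $ per QALY gained."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        # Made-up per-patient averages for the two strategies
        cost_screen, qaly_screen = 2_500.0, 24.10
        cost_usual, qaly_usual = 2_100.0, 23.90

        ratio = icer(cost_screen, qaly_screen, cost_usual, qaly_usual)
        wtp = 50_000.0                           # willingness-to-pay threshold, $ per QALY
        verdict = "cost-effective" if ratio < wtp else "not cost-effective"
        print(f"ICER = ${ratio:,.0f} per QALY gained -> {verdict}")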

  8. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  9. COMPUTER SYSTEM FOR DETERMINATION OF COST DAILY SUGAR PRODUCTION AND INCIDENTS DECISIONS FOR COMPANIES SUGAR (SACODI

    Directory of Open Access Journals (Sweden)

    Alejandro Álvarez-Navarro

    2016-01-01

    Full Text Available The process of sugar production is complex; anything that affects this chain has direct repercussions on sugar production costs, a synthetic and decisive indicator for decision-making. Currently, Cuban sugar factories determine this cost weekly, and their decision-making process suffers as a result. Looking for solutions to this problem, the present work, part of a territorial project approved by CITMA, set out to calculate the cost of production daily, weekly, monthly and cumulatively up to a given date, following an adaptation of the methodology used by the National Costs System for sugarcane created by MINAZ, supported by a computer system denominated SACODI. This adaptation registers the physical and economic indicators of all direct and indirect sugarcane expenses and, from this information, generates an economic-mathematical goal-programming model whose solution indicates the best short-term balance, in amount of sugar, among the entities of the sugar factory. The implementation of the system in the «Julio A. Mella» sugar factory in Santiago de Cuba during the 2008-09 sugar-cane harvest produced an estimated cost decrease of up to 3.5% through better decision-making.
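
    Goal programming of the general kind mentioned above can be expressed as a linear program that minimizes deviations from a production target. The sketch below, using scipy.optimize.linprog, allocates cane between two lines so that sugar output approaches a target within capacity limits; the yields, capacities and target are invented and this is not the SACODI model itself.

        from scipy.optimize import linprog

        yield1, yield2 = 0.11, 0.095     # tonnes of sugar per tonne of cane (invented)
        cap1, cap2 = 3_000.0, 4_000.0    # daily milling capacity per line, tonnes of cane
        goal = 700.0                     # daily sugar target, tonnes

        # Decision vector [x1, x2, d_minus, d_plus]: cane to each line plus deviation variables.
        c = [0.0, 0.0, 1.0, 1.0]                     # minimize total deviation from the goal
        A_eq = [[yield1, yield2, 1.0, -1.0]]         # yield1*x1 + yield2*x2 + d- - d+ = goal
        b_eq = [goal]
        bounds = [(0, cap1), (0, cap2), (0, None), (0, None)]

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        x1, x2, d_minus, d_plus = res.x
        print(f"cane to line 1: {x1:.0f} t, line 2: {x2:.0f} t, sugar shortfall: {d_minus:.1f} t")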

  10. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that ranges today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  11. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex) which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs are for Version 1 of the MONITOR code. A simulation which uses the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other run MONITOR was run using an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performed the same calculations in both modes of operation. The third run was made for Version 2 of the MONITOR code. A simulation which uses the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate. This run was made with MONITOR being run as an independent model. The results of several cases have been verified by hand calculations

  12. Agglomeration Economies and the High-Tech Computer

    OpenAIRE

    Wallace, Nancy E.; Walls, Donald

    2004-01-01

    This paper considers the effects of agglomeration on the production decisions of firms in the high-tech computer cluster. We build upon an alternative definition of the high-tech computer cluster developed by Bardhan et al. (2003) and we exploit a new data source, the National Establishment Time-Series (NETS) Database, to analyze the spatial distribution of firms in this industry. An essential contribution of this research is the recognition that high-tech firms are heterogeneous collections ...

  13. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, standard tool for a long time in the High Energy Physics community, is being slowly introduced at CERN in the mechanical engineering field. The first major application was structural analysis followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  14. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

    High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...

  15. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  16. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  17. Compact High Performance Spectrometers Using Computational Imaging, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Energy Research Company (ERCo), in collaboration with CoVar Applied Technologies, proposes the development of high throughput, compact, and lower cost spectrometers...

  18. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead and improving application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
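
    The basic measurement behind such a study, compression ratio versus compression time for checkpoint-like data, can be reproduced in a few lines with a generic text-style compressor such as zlib. The synthetic 'checkpoint' below (a smooth floating-point field) is purely illustrative and the numbers it produces are not those reported in the article.

        import time
        import zlib
        import numpy as np

        # Synthetic "checkpoint": a smooth floating-point field, 16 MB of float64 data.
        rng = np.random.default_rng(1)
        checkpoint = np.cumsum(rng.normal(0.0, 1e-3, 2_000_000)).tobytes()

        for level in (1, 6, 9):                       # fast ... thorough
            t0 = time.perf_counter()
            compressed = zlib.compress(checkpoint, level)
            dt = time.perf_counter() - t0
            ratio = len(checkpoint) / len(compressed)
            throughput = len(checkpoint) / dt / 1e6
            print(f"zlib level {level}: ratio {ratio:.2f}x, {throughput:.0f} MB/s")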

  19. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  20. High temperature estimation through computer vision

    International Nuclear Information System (INIS)

    Segovia de los R, J.A.

    1996-01-01

    Pattern recognition has among its purposes the design and analysis of classification algorithms applied to representations of images, sounds or signals of any kind. In a process involving a thermal plasma reactor, conventional devices or methods cannot be employed to measure the very high temperatures. The goal of this work was to determine these temperatures in an indirect way. (Author)

  1. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  2. High Performance, Low Cost Hydrogen Generation from Renewable Energy

    Energy Technology Data Exchange (ETDEWEB)

    Ayers, Katherine [Proton OnSite; Dalton, Luke [Proton OnSite; Roemer, Andy [Proton OnSite; Carter, Blake [Proton OnSite; Niedzwiecki, Mike [Proton OnSite; Manco, Judith [Proton OnSite; Anderson, Everett [Proton OnSite; Capuano, Chris [Proton OnSite; Wang, Chao-Yang [Penn State University; Zhao, Wei [Penn State University

    2014-02-05

    Renewable hydrogen from proton exchange membrane (PEM) electrolysis is gaining strong interest in Europe, especially in Germany where wind penetration is already at critical levels for grid stability. For this application as well as biogas conversion and vehicle fueling, megawatt (MW) scale electrolysis is required. Proton has established a technology roadmap to achieve the necessary cost reductions and manufacturing scale up to maintain U.S. competitiveness in these markets. This project represents a highly successful example of the potential for cost reduction in PEM electrolysis, and provides the initial stack design and manufacturing development for Proton’s MW scale product launch. The majority of the program focused on the bipolar assembly, from electrochemical modeling to subscale stack development through prototyping and manufacturing qualification for a large active area cell platform. Feasibility for an advanced membrane electrode assembly (MEA) with 50% reduction in catalyst loading was also demonstrated. Based on the progress in this program and other parallel efforts, H2A analysis shows the status of PEM electrolysis technology dropping below $3.50/kg production costs, exceeding the 2015 target.

  3. A cost of sexual attractiveness to high-fitness females.

    Directory of Open Access Journals (Sweden)

    Tristan A F Long

    2009-12-01

    Full Text Available Adaptive mate choice by females is an important component of sexual selection in many species. The evolutionary consequences of male mate preferences, however, have received relatively little study, especially in the context of sexual conflict, where males often harm their mates. Here, we describe a new and counterintuitive cost of sexual selection in species with both male mate preference and sexual conflict via antagonistic male persistence: male mate choice for high-fecundity females leads to a diminished rate of adaptive evolution by reducing the advantage to females of expressing beneficial genetic variation. We then use a Drosophila melanogaster model system to experimentally test the key prediction of this theoretical cost: that antagonistic male persistence is directed toward, and harms, intrinsically higher-fitness females more than it does intrinsically lower-fitness females. This asymmetry in male persistence causes the tails of the population's fitness distribution to regress towards the mean, thereby reducing the efficacy of natural selection. We conclude that adaptive male mate choice can lead to an important, yet unappreciated, cost of sex and sexual selection.

  4. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    Science.gov (United States)

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  5. An Introduction to Computing: Content for a High School Course.

    Science.gov (United States)

    Rogers, Jean B.

    A general outline of the topics that might be covered in a computers and computing course for high school students is provided. Topics are listed in the order in which they should be taught, and the relative amount of time to be spent on each topic is suggested. Seven units are included in the course outline: (1) general introduction, (2) using…

  6. Improvements in high energy computed tomography

    International Nuclear Information System (INIS)

    Burstein, P.; Krieger, A.; Annis, M.

    1984-01-01

    In computerized axial tomography employed with large, relatively dense objects such as a solid fuel rocket engine, using high energy x-rays, such as a 15 MeV source, a collimator is employed with an acceptance angle substantially less than 1°, in a preferred embodiment 7 minutes of a degree. In a preferred embodiment, the collimator may be located between the object and the detector, although in other embodiments, a pre-collimator may also be used, that is between the x-ray source and the object being illuminated. (author)

  7. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  8. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  9. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  10. High resolution computed tomography of auditory ossicles

    International Nuclear Information System (INIS)

    Isono, M.; Murata, K.; Ohta, F.; Yoshida, A.; Ishida, O.; Kinki Univ., Osaka

    1990-01-01

    Auditory ossicular sections were scanned at section thicknesses (mm)/section interspaces (mm) of 1.5/1.5 (61 patients), 1.0/1.0 (13 patients) or 1.5/1.0 (33 patients). At any type of section thickness/interspace, the malleal and incudal structures were observed with almost equal frequency. The region of the incudostapedial joint and each component part of the stapes were shown more frequently at a section interspace of 1.0 mm than at 1.5 mm. The visualization frequency of each auditory ossicular component on two or more serial sections was investigated. At a section thickness/section interspace of 1.5/1.5, the visualization rates were low except for large components such as the head of the malleus and the body of the incus, but at a slice interspace of 1.0 mm, they were high for most components of the auditory ossicles. (orig.)

  11. Cost-effectiveness of routine computed tomography in the evaluation of idiopathic unilateral vocal fold paralysis.

    Science.gov (United States)

    Hojjat, Houmehr; Svider, Peter F; Folbe, Adam J; Raza, Syed N; Carron, Michael A; Shkoukani, Mahdi A; Merati, Albert L; Mayerhoff, Ross M

    2017-02-01

    To evaluate the cost-effectiveness of routine computed tomography (CT) in individuals with unilateral vocal fold paralysis (UVFP). STUDY DESIGN: Health economics decision tree analysis. METHODS: A decision tree was constructed to determine the incremental cost-effectiveness ratio (ICER) of CT imaging in UVFP patients. Univariate sensitivity analysis was utilized to calculate what the probability of discovering an etiology of the paralysis would have to be to make CT with contrast more cost-effective than no imaging. We used two studies examining findings in UVFP patients. The decision pathways were CT of the neck with intravenous contrast after diagnostic laryngoscopy versus laryngoscopy alone. The probability of detecting an etiology for UVFP and the associated costs were extracted to construct the decision tree. The only incorrect diagnosis was missing a mass in the no-imaging decision branch, which rendered an effectiveness of 0. The ICER of using CT was $3,306, below most acceptable willingness-to-pay (WTP) thresholds. Additionally, univariate sensitivity analysis indicated that at a WTP threshold of $30,000, obtaining CT imaging was the most cost-effective choice when the probability of having a lesion was above 1.7%. Multivariate probabilistic sensitivity analysis with Monte Carlo simulations also showed that at a WTP of $30,000, CT scanning is more cost-effective, with 99.5% certainty. Particularly in the current healthcare environment, characterized by increasing consciousness of utilization and defensive medicine, economic evaluations represent evidence-based findings that can be employed to facilitate appropriate decision-making and enhance physician-patient communication. This economic evaluation strongly supports obtaining CT imaging in patients with newly diagnosed UVFP. 2c. Laryngoscope, 127:440-444, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
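
    The threshold result can be understood from a toy version of the univariate analysis: scanning everyone adds a fixed cost per patient, and the QALYs gained scale with the prevalence of an underlying lesion, so the prevalence at which CT becomes cost-effective is roughly the scan cost divided by (willingness-to-pay times QALYs preserved per detected case). The CT cost and QALY loss below are hypothetical placeholders, not the study's model inputs.

        ct_cost = 400.0          # assumed incremental cost of adding the CT scan, $
        qaly_loss_missed = 0.8   # assumed QALYs lost when an occult lesion goes undetected
        wtp = 30_000.0           # willingness-to-pay, $ per QALY

        def icer_vs_no_ct(prevalence):
            """Incremental cost per QALY gained by scanning everyone versus scanning no one."""
            delta_cost = ct_cost                          # every patient receives the scan
            delta_qaly = prevalence * qaly_loss_missed    # QALYs preserved in true positives
            return delta_cost / delta_qaly

        threshold = ct_cost / (wtp * qaly_loss_missed)
        print(f"CT becomes cost-effective above a prevalence of {threshold:.2%}")
        print(f"e.g. at 8% prevalence the ICER is ~ ${icer_vs_no_ct(0.08):,.0f} per QALY")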

  12. The cost-effectiveness and cost-utility of high-dose palliative radiotherapy for advanced non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Coy, Peter; Schaafsma, Joseph; Schofield, John A.

    2000-01-01

    Purpose: To compute cost-effectiveness/cost-utility (CE/CU) ratios, from the treatment clinic and societal perspectives, for high-dose palliative radiotherapy treatment (RT) for advanced non-small-cell lung cancer (NSCLC) against best supportive care (BSC) as comparator, and thereby demonstrate a method for computing CE/CU ratios when randomized clinical trial (RCT) data cannot be generated. Methods and Materials: Unit cost estimates based on an earlier reported 1989-90 analysis of treatment costs at the Vancouver Island Cancer Centre, Victoria, British Columbia, Canada, are updated to 1997-1998 and then used to compute the incremental cost of an average dose of high-dose palliative RT. The incremental number of life days and quality-adjusted life days (QALDs) attributable to treatment are from earlier reported regression analyses of the survival and quality-of-life data from patients who enrolled prospectively in a lung cancer management cost-effectiveness study at the clinic over a 2-year period from 1990 to 1992. Results: The baseline CE and CU ratios are $9245 Cdn per life year (LY) and $12,836 per quality-adjusted life year (QALY), respectively, from the clinic perspective; and $12,253/LY and $17,012/QALY, respectively, from the societal perspective. Multivariate sensitivity analysis for the CE ratio produces a range of $5513-28,270/LY from the clinic perspective, and $7307-37,465/LY from the societal perspective. Similar calculations for the CU ratio produce a range of $7205-37,134/QALY from the clinic perspective, and $9550-49,213/QALY from the societal perspective. Conclusion: The cost effectiveness and cost utility of high-dose palliative RT for advanced NSCLC compares favorably with the cost effectiveness of other forms of treatment for NSCLC, of treatments of other forms of cancer, and of many other commonly used medical interventions; and lies within the US $50,000/QALY benchmark often cited for cost-effective care

  13. Social incidence and economic costs of carbon limits; A computable general equilibrium analysis for Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Stephan, G.; Van Nieuwkoop, R.; Wiedmer, T. (Institute for Applied Microeconomics, Univ. of Bern (Switzerland))

    1992-01-01

    Both distributional and allocational effects of limiting carbon dioxide emissions in a small and open economy are discussed. It starts from the assumption that Switzerland attempts to stabilize its greenhouse gas emissions over the next 25 years, and evaluates costs and benefits of the respective reduction programme. From a methodological viewpoint, it is illustrated how a computable general equilibrium approach can be adopted for identifying economic effects of cutting greenhouse gas emissions on the national level. From a political economy point of view it considers the social incidence of a greenhouse policy. It shows in particular that public acceptance can be increased and economic costs of greenhouse policies can be reduced, if carbon taxes are accompanied by revenue redistribution. 8 tabs., 1 app., 17 refs.

  14. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Flattened excerpt of the Peregrine software applications table; recoverable entries: GAMS (General Algebraic Modeling System) - high-level modeling system for mathematical programming; Gurobi Optimizer - solver for mathematical programming; LAMMPS - chemistry and materials science; an unnamed chemistry code for reactivities and vibrational, electronic and NMR spectra; R - statistical computing environment for statistics and analysis.

  15. The comparison of high and standard definition computed ...

    African Journals Online (AJOL)

    The comparison of high and standard definition computed tomography techniques regarding coronary artery imaging. A Aykut, D Bumin, Y Omer, K Mustafa, C Meltem, C Orhan, U Nisa, O Hikmet, D Hakan, K Mert ...

  16. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  17. On the role of cost-sensitive learning in multi-class brain-computer interfaces.

    Science.gov (United States)

    Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick

    2010-06-01

    Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.
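
    As a concrete illustration of the cost-sensitive idea discussed here, the sketch below builds a three-class cost matrix (left motor imagery, rest, right motor imagery) in which confusing the two motor classes is penalized most heavily, and scores predictions by plain accuracy, expected cost, and the ordinal mean-squared error. The class ordering and cost values are illustrative assumptions, not the competition's actual settings.

```python
import numpy as np

# Classes ordered as in an ordinal-regression view of the problem:
# 0 = left motor imagery, 1 = rest, 2 = right motor imagery (assumed ordering).
# Misclassifying one motor task as the other (0 <-> 2) carries the largest cost.
COST = np.array([
    [0.0, 1.0, 4.0],   # true left
    [1.0, 0.0, 1.0],   # true rest
    [4.0, 1.0, 0.0],   # true right
])

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = np.mean(y_true == y_pred)
    mean_cost = COST[y_true, y_pred].mean()
    mse = np.mean((y_true - y_pred) ** 2)   # ordinal (mean-squared) error
    return accuracy, mean_cost, mse

# Toy predictions: classifier B avoids motor-vs-motor confusions and wins on cost
# and MSE even though its plain accuracy matches classifier A.
y_true = [0, 0, 1, 2, 2, 1]
print(evaluate(y_true, [2, 0, 1, 2, 0, 1]))  # classifier A: two motor-vs-motor errors
print(evaluate(y_true, [1, 0, 1, 2, 1, 1]))  # classifier B: two motor-vs-rest errors
```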

  18. High contrast computed tomography with synchrotron radiation

    Science.gov (United States)

    Itai, Yuji; Takeda, Tohoru; Akatsuka, Takao; Maeda, Tomokazu; Hyodo, Kazuyuki; Uchida, Akira; Yuasa, Tetsuya; Kazama, Masahiro; Wu, Jin; Ando, Masami

    1995-02-01

    This article describes a new monochromatic x-ray CT system using synchrotron radiation, currently under development, with applications in biomedical diagnosis. The system is designed to provide clear images and to detect contrast materials at low concentration for the quantitative functional evaluation of organs in correspondence with their anatomical structures. In this system, with the x-ray energy changing from 30 to 52 keV, images can be obtained to detect various contrast materials (iodine, barium, and gadolinium), and K-edge energy subtraction is applied. Herein, the features of the new system designed to enhance the advantages of SR are reported. With the introduction of a double-crystal monochromator, high-order x-ray contamination is eliminated. The newly designed CCD detector, with a wide dynamic range of 60 000:1, has a spatial resolution of 200 μm. The resulting image quality, which is expected to show improved contrast and spatial resolution, is currently under investigation.
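
    K-edge energy subtraction of the kind applied in this system can be sketched in a few lines: transmission images acquired just above and just below the contrast agent's K-edge are log-subtracted, so the contrast material stands out while other tissues largely cancel. The snippet below is a generic numpy illustration on synthetic data, not a model of the actual beamline or detector.

```python
import numpy as np

def k_edge_subtraction(intensity_above, intensity_below, flat_above, flat_below):
    """Log-subtract two transmission images bracketing the contrast agent's K-edge.

    Attenuation rises sharply above the K-edge for the contrast material only,
    so the difference highlights it while soft tissue largely cancels.
    """
    mu_t_above = -np.log(intensity_above / flat_above)   # line integrals above edge
    mu_t_below = -np.log(intensity_below / flat_below)   # line integrals below edge
    return mu_t_above - mu_t_below

# Synthetic 4x4 example: the central pixels mimic an iodine-filled vessel.
flat = np.full((4, 4), 1000.0)
below = np.full((4, 4), 600.0)
above = below.copy()
above[1:3, 1:3] = 450.0          # extra attenuation above the K-edge
print(k_edge_subtraction(above, below, flat, flat).round(3))
```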

  19. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  20. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. Low Cost DIY Lenses kit For High School Teaching

    Science.gov (United States)

    Thepnurat, Meechai; Saphet, Parinya; Tong-on, Anusorn

    2017-09-01

    A set of lenses was fabricated from low-cost materials in a DIY (do-it-yourself) process. The purpose was to demonstrate to teachers and students in high schools how to construct lenses by themselves with locally available materials. The lenses can be applied in teaching physics, for example the nature of a lens such as its focal length and the paths of light rays passing through lenses in either direction, using a set of simple laser pointers. The kit was made from transparent 2-mm-thick acrylic Perspex. It was cut into rectangular pieces with dimensions of 2 x 15 cm and bent into a curved shape with a hot air blower on a cylindrical wooden rod, giving curvature radii of about 3-4.5 cm. A pair of these Perspex pieces was then formed into a hollow thick lens with a supporting base platform, so that any appropriate liquid could be filled in. The focal length of the lens was measured from laser-beam traces drawn on paper. The refractive index n of the filling liquid could then be calculated from the measured focal length f. The kit is low-cost and DIY, yet well suited to optics teaching in the high school laboratory.
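
    The refractive-index calculation mentioned above follows from the thin-lens (lensmaker's) approximation. The sketch below assumes a symmetric biconvex liquid lens with equal surface radii and inverts 1/f = (n - 1)(1/R1 - 1/R2) to recover n from the measured focal length; the numerical values are placeholders in the range the kit describes, and thickness and container-wall effects are ignored.

```python
def refractive_index_symmetric_lens(focal_length_cm, radius_cm):
    """Thin-lens estimate of n for a symmetric biconvex lens (R1 = R, R2 = -R).

    Lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2) = (n - 1) * 2/R,
    so n = 1 + R / (2 f).  Thickness and container-wall effects are ignored.
    """
    return 1.0 + radius_cm / (2.0 * focal_length_cm)

# Example with assumed measurements: surface radius 4.0 cm, measured focal length 6.0 cm.
print(refractive_index_symmetric_lens(focal_length_cm=6.0, radius_cm=4.0))  # ~1.33 (water-like)
```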

  2. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  3. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR and free standing clad as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely Physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/ stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of EPRI project ``Light water reactor fuel rod modelling code evaluation`` and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models, on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs.

  4. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR and free standing clad as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely Physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/ stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of EPRI project ''Light water reactor fuel rod modelling code evaluation'' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models, on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs

  5. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula* and

  6. Computational Comparison of Several Greedy Algorithms for the Minimum Cost Perfect Matching Problem on Large Graphs

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Laporte, Gilbert

    2017-01-01

    The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage collection in Denmark. Common heuristics for the CARP involve the optimal matching of the odd-degree nodes of a graph. The algorithms used in the comparison include the CPLEX solution of an exact formulation, the LEDA matching algorithm, a recent implementation of the Blossom algorithm, as well as six
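
    One of the simplest heuristics in this family is a greedy matching that repeatedly pairs the cheapest remaining edge between unmatched nodes. The sketch below is a generic greedy on a complete graph with Euclidean costs, meant only to illustrate the kind of algorithm being compared, not any specific implementation from the paper.

```python
import math
from itertools import combinations

def greedy_perfect_matching(points):
    """Greedy heuristic for minimum-cost perfect matching on a complete graph.

    Sorts all edges by Euclidean cost and repeatedly takes the cheapest edge
    whose endpoints are both still unmatched.  Fast, but not optimal in general.
    """
    edges = sorted(
        combinations(range(len(points)), 2),
        key=lambda e: math.dist(points[e[0]], points[e[1]]),
    )
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# Example: four odd-degree nodes in the plane (coordinates assumed for illustration).
pts = [(0, 0), (1, 0), (10, 0), (11, 1)]
print(greedy_perfect_matching(pts))   # pairs (0, 1) and (2, 3)
```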

  7. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace.

    Science.gov (United States)

    Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao

    2016-11-25

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.

  8. Cost-effective computations with boundary interface operators in elliptic problems

    International Nuclear Information System (INIS)

    Khoromskij, B.N.; Mazurkevich, G.E.; Nikonov, E.G.

    1993-01-01

    The numerical algorithm for fast computations with interface operators associated with the elliptic boundary value problems (BVP) defined on step-type domains is presented. The algorithm is based on the asymptotically almost optimal technique developed for treatment of the discrete Poincare-Steklov (PS) operators associated with the finite-difference Laplacian on rectangles when using the uniform grid with a 'displacement by h/2'. The approach can be regarded as an extension of the method proposed for the partial solution of the finite-difference Laplace equation to the case of displaced grids and mixed boundary conditions. It is shown that the action of the PS operator for the Dirichlet problem and mixed BVP can be computed with expenses of the order of O(N log^2 N) both for arithmetical operations and computer memory needs, where N is the number of unknowns on the rectangle boundary. The single domain algorithm is applied to solving the multidomain elliptic interface problems with piecewise constant coefficients. The numerical experiments presented confirm almost linear growth of the computational costs and memory needs with respect to the dimension of the discrete interface problem. 14 refs., 3 figs., 4 tabs.

  9. Direct costs and cost-effectiveness of dual-source computed tomography and invasive coronary angiography in patients with an intermediate pretest likelihood for coronary artery disease.

    Science.gov (United States)

    Dorenkamp, Marc; Bonaventura, Klaus; Sohns, Christian; Becker, Christoph R; Leber, Alexander W

    2012-03-01

    The study aims to determine the direct costs and comparative cost-effectiveness of latest-generation dual-source computed tomography (DSCT) and invasive coronary angiography for diagnosing coronary artery disease (CAD) in patients suspected of having this disease. The study was based on a previously elaborated cohort with an intermediate pretest likelihood for CAD and on complementary clinical data. Cost calculations were based on a detailed analysis of direct costs, and generally accepted accounting principles were applied. Based on Bayes' theorem, a mathematical model was used to compare the cost-effectiveness of both diagnostic approaches. Total costs included direct costs, induced costs and costs of complications. Effectiveness was defined as the ability of a diagnostic test to accurately identify a patient with CAD. Direct costs amounted to €98.60 for DSCT and to €317.75 for invasive coronary angiography. Analysis of model calculations indicated that cost-effectiveness grew hyperbolically with increasing prevalence of CAD. Given the prevalence of CAD in the study cohort (24%), DSCT was found to be more cost-effective than invasive coronary angiography (€970 vs €1354 for one patient correctly diagnosed as having CAD). At a disease prevalence of 49%, DSCT and invasive angiography were equally effective with costs of €633. Above a threshold value of disease prevalence of 55%, proceeding directly to invasive coronary angiography was more cost-effective than DSCT. With proper patient selection and consideration of disease prevalence, DSCT coronary angiography is cost-effective for diagnosing CAD in patients with an intermediate pretest likelihood for it. However, the range of eligible patients may be smaller than previously reported.
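
    The prevalence-dependent comparison reported here can be reproduced with a small Bayes-style model: effectiveness is the probability that a CAD patient is correctly identified, and in a DSCT-first strategy every positive scan is confirmed by invasive angiography. The sketch below uses the direct costs quoted in the abstract but assumes illustrative sensitivity and specificity values and a simplified cost model (no induced or complication costs), so the crossover it shows lands near, but not exactly at, the study's 55% threshold.

```python
def cost_per_correct_dx_dsct(prev, sens, spec, dsct_cost, ica_cost):
    """DSCT-first strategy: every positive DSCT is confirmed by invasive angiography.

    Effectiveness = probability a CAD patient is correctly identified (prev * sens);
    cost = DSCT for everyone plus invasive angiography for every DSCT-positive patient.
    """
    positive_rate = prev * sens + (1 - prev) * (1 - spec)
    return (dsct_cost + positive_rate * ica_cost) / (prev * sens)

def cost_per_correct_dx_ica(prev, ica_cost):
    """Direct invasive angiography, treated as the reference (perfect) test."""
    return ica_cost / prev

# Direct costs from the abstract; sensitivity/specificity are illustrative assumptions.
DSCT_COST, ICA_COST = 98.60, 317.75
SENS, SPEC = 0.90, 0.90

for prev in (0.24, 0.40, 0.55, 0.70):
    dsct = cost_per_correct_dx_dsct(prev, SENS, SPEC, DSCT_COST, ICA_COST)
    ica = cost_per_correct_dx_ica(prev, ICA_COST)
    print(f"prevalence {prev:.2f}: DSCT-first €{dsct:7.0f}   direct ICA €{ica:7.0f}")
```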

  10. How do high cost-sharing policies for physician care affect total care costs among people with chronic disease?

    Science.gov (United States)

    Xin, Haichang; Harman, Jeffrey S; Yang, Zhou

    2014-01-01

    This study examines whether high cost-sharing in physician care is associated with a differential impact on total care costs by health status. Total care includes physician care, emergency room (ER) visits and inpatient care. Since high cost-sharing policies can reduce needed care as well as unneeded care use, this raises the concern of whether these policies are a good strategy for controlling costs among chronically ill patients. This study used the 2007 Medical Expenditure Panel Survey data with a cross-sectional study design. Difference-in-differences (DID), an instrumental variable technique, a two-part model, and a bootstrap technique were employed to analyze cost data. Chronically ill individuals' probability of reducing any overall care costs was significantly lower than that of healthier individuals (beta = 2.18, p = 0.04), while the integrated DID estimator from split results indicated that going from low cost-sharing to high cost-sharing significantly reduced costs by $12,853.23 more for sick people than for healthy people (95% CI: -$17,582.86, -$8,123.60). This greater cost reduction in total care among sick people likely resulted from greater cost reduction in physician care, and may have come at the expense of jeopardizing health outcomes by depriving patients of needed care. Thus, these policies would be inappropriate in the short run, and unlikely in the long run to control health plan costs among chronically ill individuals. A generous benefit design with low cost-sharing policies in physician care or primary care is recommended for both health plans and chronically ill individuals, to save costs and protect these enrollees' health status.
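
    For readers unfamiliar with the difference-in-differences estimator used here, the sketch below computes the basic 2x2 DID contrast (sick vs. healthy, low vs. high cost-sharing) from group mean costs. The numbers are synthetic placeholders, not MEPS data, and the real analysis additionally uses instrumental variables, a two-part model, and bootstrapped standard errors.

```python
import numpy as np

def did_estimate(costs):
    """Basic 2x2 difference-in-differences on group mean costs.

    costs[group][policy] holds individual total-care costs, where group is
    'sick' or 'healthy' and policy is 'low' or 'high' cost-sharing.
    DID = (sick_high - sick_low) - (healthy_high - healthy_low).
    """
    delta_sick = np.mean(costs["sick"]["high"]) - np.mean(costs["sick"]["low"])
    delta_healthy = np.mean(costs["healthy"]["high"]) - np.mean(costs["healthy"]["low"])
    return delta_sick - delta_healthy

# Synthetic annual total-care costs (US$) for illustration only.
rng = np.random.default_rng(0)
costs = {
    "sick":    {"low": rng.normal(30000, 5000, 200), "high": rng.normal(18000, 5000, 200)},
    "healthy": {"low": rng.normal(6000, 2000, 200),  "high": rng.normal(5000, 2000, 200)},
}
print(f"DID estimate: ${did_estimate(costs):,.0f}")   # negative: larger cost drop for the sick
```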

  11. Feasibility Study and Cost Benefit Analysis of Thin-Client Computer System Implementation Onboard United States Navy Ships

    National Research Council Canada - National Science Library

    Arbulu, Timothy D; Vosberg, Brian J

    2007-01-01

    The purpose of this MBA project was to conduct a feasibility study and a cost benefit analysis of using thin-client computer systems instead of traditional networks onboard United States Navy ships...

  12. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given
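
    Offline event reconstruction is a classic embarrassingly parallel workload: events are independent, so a farm simply distributes them across workers. The sketch below shows that pattern with Python's multiprocessing pool and a dummy per-event function; it illustrates the farming idea only and is unrelated to the CPS package itself.

```python
from multiprocessing import Pool

def reconstruct(event):
    """Stand-in for per-event reconstruction: here, just a toy energy sum."""
    return event["id"], sum(event["hits"])

def process_events(events, workers=4):
    """Farm independent events out to a pool of worker processes."""
    with Pool(processes=workers) as pool:
        return dict(pool.map(reconstruct, events))

if __name__ == "__main__":
    # Synthetic 'raw data': each event carries an id and a list of detector hits.
    events = [{"id": i, "hits": [j * 0.1 for j in range(100)]} for i in range(1000)]
    results = process_events(events)
    print(len(results), "events reconstructed; event 0 energy:", round(results[0], 1))
```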

  13. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of

  14. A low cost high resolution pattern generator for electron-beam lithography

    International Nuclear Information System (INIS)

    Pennelli, G.; D'Angelo, F.; Piotto, M.; Barillaro, G.; Pellegrini, B.

    2003-01-01

    A simple, very low cost pattern generator for electron-beam lithography is presented. When it is applied to a scanning electron microscope, the system allows a high precision positioning of the beam for lithography of very small structures. Patterns are generated by a suitable software implemented on a personal computer, by using very simple functions, allowing an easy development of new writing strategies for a great adaptability to different user necessities. Hardware solutions, as optocouplers and battery supply, have been implemented for reduction of noise and disturbs on the voltages controlling the positioning of the beam

  15. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties of applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  16. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the cost scaling in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
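
    The scaling claims are easy to tabulate. The sketch below evaluates the sequential and ideal shared-memory parallel cost estimates quoted above for a given problem size and polynomial order; constants are dropped, so only the relative growth is meaningful, and the example values are chosen purely for illustration.

```python
import math

# Theoretical cost estimates from the abstract (constants dropped).
SEQUENTIAL = {
    1: lambda N, p: N * p**2,
    2: lambda N, p: N**1.5 * p**3,
    3: lambda N, p: N**2 * p**3,
}
PARALLEL_SHARED = {
    1: lambda N, p: p**2 * math.log(N / p),
    2: lambda N, p: N * p**2,
    3: lambda N, p: N**(4 / 3) * p**2,
}

def speedup_table(N, p):
    """Ratio of sequential to ideal shared-memory parallel cost per dimension."""
    return {d: SEQUENTIAL[d](N, p) / PARALLEL_SHARED[d](N, p) for d in (1, 2, 3)}

# Example: one million degrees of freedom, cubic B-splines (values chosen for illustration).
for dim, ratio in speedup_table(N=1_000_000, p=3).items():
    print(f"{dim}D: ideal parallel speedup factor ~ {ratio:,.0f}")
```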

  17. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.

    2014-01-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the cost scaling in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.

  18. Computational sensing of herpes simplex virus using a cost-effective on-chip microscope

    KAUST Repository

    Ray, Aniruddha

    2017-07-03

    Caused by the herpes simplex virus (HSV), herpes is a viral infection that is one of the most widespread diseases worldwide. Here we present a computational sensing technique for specific detection of HSV using both viral immuno-specificity and the physical size range of the viruses. This label-free approach involves a compact and cost-effective holographic on-chip microscope and a surface-functionalized glass substrate prepared to specifically capture the target viruses. To enhance the optical signatures of individual viruses and increase their signal-to-noise ratio, self-assembled polyethylene glycol based nanolenses are rapidly formed around each virus particle captured on the substrate using a portable interface. Holographic shadows of specifically captured viruses that are surrounded by these self-assembled nanolenses are then reconstructed, and the phase image is used for automated quantification of the size of each particle within our large field-of-view, ~30 mm2. The combination of viral immuno-specificity due to surface functionalization and the physical size measurements enabled by holographic imaging is used to sensitively detect and enumerate HSV particles using our compact and cost-effective platform. This computational sensing technique can find numerous uses in global health related applications in resource-limited environments.

  19. Cost Savings Associated with the Adoption of a Cloud Computing Data Transfer System for Trauma Patients.

    Science.gov (United States)

    Feeney, James M; Montgomery, Stephanie C; Wolf, Laura; Jayaraman, Vijay; Twohig, Michael

    2016-09-01

    Among transferred trauma patients, challenges with the transfer of radiographic studies include problems loading or viewing the studies at the receiving hospitals, and problems manipulating, reconstructing, or evaluating the transferred images. Cloud-based image transfer systems may address some of these problems. We reviewed the charts of patients transferred during one year surrounding the adoption of a cloud computing data transfer system. We compared the rates of repeat imaging before (precloud) and after (postcloud) the adoption of the cloud-based data transfer system. During the precloud period, 28 out of 100 patients required 90 repeat studies. With the cloud computing transfer system in place, three out of 134 patients required seven repeat films. There was a statistically significant decrease in the proportion of patients requiring repeat films (28% to 2.2%, P < .0001). Based on an annualized volume of 200 trauma patient transfers, the cost savings estimated using three methods of cost analysis is between $30,272 and $192,453.
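
    The headline reduction can be checked with a few lines of arithmetic. The sketch below recomputes the repeat-imaging proportions from the counts in the abstract and adds a rough annual savings estimate; the per-repeat-study cost at the end is a placeholder assumption, not a figure from the paper, whose own estimates ($30,272-$192,453) rest on detailed costing.

```python
# Counts reported in the abstract.
pre_patients, pre_repeat_patients = 100, 28
post_patients, post_repeat_patients = 134, 3

pre_rate = pre_repeat_patients / pre_patients        # 28.0%
post_rate = post_repeat_patients / post_patients     # ~2.2%
print(f"repeat-imaging rate: {pre_rate:.1%} -> {post_rate:.1%}")

# Rough annual savings for 200 transfers, assuming a placeholder cost per avoided
# repeat study; the paper's own estimates use detailed institutional costing.
annual_transfers = 200
assumed_cost_per_repeat_study = 500.0                # placeholder, US$
avoided_patients = annual_transfers * (pre_rate - post_rate)
print(f"~{avoided_patients:.0f} patients spared repeat imaging per year "
      f"(~${avoided_patients * assumed_cost_per_repeat_study:,.0f} at the assumed cost)")
```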

  20. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  1. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  2. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  3. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model accounts for cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  4. Omniscopes: Large area telescope arrays with only NlogN computational cost

    International Nuclear Information System (INIS)

    Tegmark, Max; Zaldarriaga, Matias

    2010-01-01

    We show that the class of antenna layouts for telescope arrays allowing cheap analysis hardware (with correlator cost scaling as N log N rather than N^2 with the number of antennas N) is encouragingly large, including not only previously discussed rectangular grids but also arbitrary hierarchies of such grids, with arbitrary rotations and shears at each level. We show that all correlations for such a 2D array with an n-level hierarchy can be efficiently computed via a fast Fourier transform in not two but 2n dimensions. This can allow major correlator cost reductions for science applications requiring exquisite sensitivity at widely separated angular scales, for example, 21 cm tomography (where short baselines are needed to probe the cosmological signal and long baselines are needed for point source removal), helping enable future 21 cm experiments with thousands or millions of cheap dipolelike antennas. Such hierarchical grids combine the angular resolution advantage of traditional array layouts with the cost advantage of a rectangular fast Fourier transform telescope. We also describe an algorithm for how a subclass of hierarchical arrays can efficiently use rotation synthesis to produce global sky maps with minimal noise and a well-characterized synthesized beam.
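
    The N log N advantage rests on the same identity as any FFT correlator: for antennas on a regular grid, the redundancy-summed correlation at each baseline separation equals an autocorrelation, which a zero-padded FFT computes in O(N log N) instead of forming N^2 products. The snippet below demonstrates the equivalence on a toy 1D grid of synthetic complex signals; it illustrates the principle only, not any real correlator design.

```python
import numpy as np

def baseline_correlations_direct(x):
    """All redundancy-summed correlations, one per baseline separation d (O(N^2))."""
    n = len(x)
    return np.array([np.sum(x[d:] * np.conj(x[:n - d])) for d in range(n)])

def baseline_correlations_fft(x):
    """The same quantities via a zero-padded FFT autocorrelation (O(N log N))."""
    n = len(x)
    spectrum = np.fft.fft(x, 2 * n)                    # zero-pad to avoid wrap-around
    return np.fft.ifft(spectrum * np.conj(spectrum))[:n]

# One time sample of synthetic complex voltages from 64 antennas on a regular 1D grid.
rng = np.random.default_rng(1)
signals = rng.normal(size=64) + 1j * rng.normal(size=64)
print(np.allclose(baseline_correlations_direct(signals),
                  baseline_correlations_fft(signals)))   # True: same result, cheaper scaling
```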

  5. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually carry significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually a high pressure, to reduce the required diameter of the pipelines. In the past these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to make use of this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the latter being a strong function of the former because the compressors are the largest power consumers. For this reason, the optimization of the compression system in terms of efficiency and cost is decisive for plant profit. Plant availability also has a strong influence on profit, especially in gas fields where the products have a relatively low added value compared to oil. Because of this, reliability becomes the third design variable of the compression system: the higher the reliability, the larger the plant production. The main way to improve the reliability of the compression system is the use of multiple compression trains in parallel, in a 2x50% or 3x50% configuration, with one train in stand-by. Such configurations are possible and have advantages and disadvantages, but their main side effect is increased cost. This is common offshore practice, but it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement, together with a critical evaluation of the overall system, can in some cases provide a cheaper system with equal or better performance. This paper presents a case study of a procedure for evaluating a compression system design that improves reliability without an extreme cost increase, balancing the number of machines, the series or parallel arrangement, and the driver selection. Two case studies will be

  6. Capital cost: high and low sulfur coal plants-1200 MWe. [High sulfur coal

    Energy Technology Data Exchange (ETDEWEB)

    1977-01-01

    This Commercial Electric Power Cost Study for 1200 MWe (Nominal) high and low sulfur coal plants consists of three volumes. The high sulfur coal plant is described in Volumes I and II, while Volume III describes the low sulfur coal plant. The design basis and cost estimate for the 1232 MWe high sulfur coal plant is presented in Volume I, and the drawings, equipment list and site description are contained in Volume II. The reference design includes a lime flue gas desulfurization system. A regenerative sulfur dioxide removal system using magnesium oxide is also presented as an alternate in Section 7 Volume II. The design basis, drawings and summary cost estimate for a 1243 MWe low sulfur coal plant are presented in Volume III. This information was developed by redesigning the high sulfur coal plant for burning low sulfur sub-bituminous coal. These coal plants utilize a mechanical draft (wet) cooling tower system for condenser heat removal. Costs of alternate cooling systems are provided in Report No. 7 in this series of studies of costs of commercial electrical power plants.

  7. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  8. The costs and cost-efficiency of providing food through schools in areas of high food insecurity.

    Science.gov (United States)

    Gelli, Aulo; Al-Shaiba, Najeeb; Espejo, Francisco

    2009-03-01

    The provision of food in and through schools has been used to support the education, health, and nutrition of school-aged children. The monitoring of financial inputs into school health and nutrition programs is critical for a number of reasons, including accountability, transparency, and equity. Furthermore, there is a gap in the evidence on the costs, cost-efficiency, and cost-effectiveness of providing food through schools, particularly in areas of high food insecurity. To estimate the programmatic costs and cost-efficiency associated with providing food through schools in food-insecure, developing-country contexts, by analyzing global project data from the World Food Programme (WFP). Project data, including expenditures and number of schoolchildren covered, were collected through project reports and validated through WFP Country Office records. Yearly project costs per schoolchild were standardized over a set number of feeding days and the amount of energy provided by the average ration. Output metrics, such as tonnage, calories, and micronutrient content, were used to assess the cost-efficiency of the different delivery mechanisms. The average yearly expenditure per child, standardized over a 200-day on-site feeding period and an average ration, excluding school-level costs, was US$21.59. The costs varied substantially according to choice of food modality, with fortified biscuits providing the least costly option of about US$11 per year and take-home rations providing the most expensive option at approximately US$52 per year. Comparisons across the different food modalities suggested that fortified biscuits provide the most cost-efficient option in terms of micronutrient delivery (particularly vitamin A and iodine), whereas on-site meals appear to be more efficient in terms of calories delivered. Transportation and logistics costs were the main drivers for the high costs. The choice of program objectives will to a large degree dictate the food modality
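
    The standardization step described here, scaling each project's expenditure to a common 200-day feeding period and a common ration, can be illustrated with a few lines of arithmetic. The project figures and the reference ration below are invented placeholders; only the method mirrors the description.

```python
def standardized_cost_per_child(annual_expenditure, children_covered,
                                feeding_days, ration_kcal,
                                reference_days=200, reference_kcal=600):
    """Scale observed cost per child to a standard feeding period and ration size.

    reference_kcal is an assumed benchmark ration; the WFP analysis uses its own
    average ration rather than this placeholder value.
    """
    raw_cost = annual_expenditure / children_covered
    return raw_cost * (reference_days / feeding_days) * (reference_kcal / ration_kcal)

# Two hypothetical projects with different feeding calendars and ration sizes.
print(standardized_cost_per_child(1_200_000, 60_000, feeding_days=180, ration_kcal=550))
print(standardized_cost_per_child(800_000, 25_000, feeding_days=220, ration_kcal=700))
```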

  9. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the large amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update its computing infrastructure and networks

  10. OMNET - high speed data communications for PDP-11 computers

    International Nuclear Information System (INIS)

    Parkman, C.F.; Lee, J.G.

    1979-12-01

    Omnet is a high speed data communications network designed at CERN for PDP-11 computers. It has grown from a link multiplexor system built for a CII 10070 computer into a full multi-point network, to which some fifty computers are now connected. It provides communications facilities for several large experimental installations as well as many smaller systems and has connections to all parts of the CERN site. The transmission protocol is discussed and brief details are given of the hardware and software used in its implementation. Also described is the gateway interface to the CERN packet switching network, 'Cernet'. (orig.)

  11. Scilab software as an alternative low-cost computing in solving the linear equations problem

    Science.gov (United States)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used both in teaching and research. These packages include licensed (proprietary) and open source (non-proprietary) software. One of the reasons to use such a package is the complexity of mathematical functions (i.e., linear problems); moreover, the number of variables in linear or non-linear functions has increased. The aim of this paper was to reflect on key aspects related to the method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as a low-cost computing alternative. In this paper, Scilab was used to propose some activities related to mathematical models. In this experiment, four numerical methods were implemented: Gaussian elimination, Gauss-Jordan elimination, matrix inversion, and lower-upper (LU) decomposition. The results of this study show that numerical-method routines can be created and explored using Scilab procedures, and that these routines can then be used as teaching material for a course.
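
    For reference, the LU-decomposition route to simultaneous linear equations translates directly into a few lines of code; the sketch below uses Python/NumPy/SciPy rather than Scilab, purely as an illustration of one of the four methods the paper implements.

```python
import numpy as np
import scipy.linalg as la

# Solve A x = b with an LU decomposition (P A = L U), one of the four methods
# discussed above.  The system below is a small illustrative example.
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

lu, piv = la.lu_factor(A)          # triangular factors plus pivot indices
x = la.lu_solve((lu, piv), b)      # forward/back substitution
print(x)                           # ~[2, 3, -1]
print(np.allclose(A @ x, b))       # True: residual check
```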

  12. Assessing Tax Form Distribution Costs: A Proposed Method for Computing the Dollar Value of Tax Form Distribution in a Public Library.

    Science.gov (United States)

    Casey, James B.

    1998-01-01

    Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…

  13. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation
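
    Parallel graph coloring, one of the combinatorial kernels named here, is typically seeded by a simple greedy pass like the one sketched below (shown sequentially for clarity): vertices that receive the same color are mutually non-adjacent and can therefore be processed concurrently. This is a generic illustration, not code from the CSCAPES software.

```python
def greedy_coloring(adjacency):
    """Greedy vertex coloring: assign each vertex the smallest color not used by
    its already-colored neighbors.  Vertices sharing a color are independent and
    can be updated in parallel (e.g., in Jacobian compression or smoothing)."""
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        color = 0
        while color in taken:
            color += 1
        colors[v] = color
    return colors

# Small example graph given as an adjacency list (illustrative only).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3],
}
coloring = greedy_coloring(graph)
print(coloring)                                   # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
print(max(coloring.values()) + 1, "colors used")
```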

  14. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  15. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  16. Use of several Cloud Computing approaches for climate modelling: performance, costs and opportunities

    Science.gov (United States)

    Perez Montes, Diego A.; Añel Cabanelas, Juan A.; Wallom, David C. H.; Arribas, Alberto; Uhe, Peter; Caderno, Pablo V.; Pena, Tomas F.

    2017-04-01

    Cloud Computing is a technological option that offers great possibilities for modelling in geosciences. We have studied how two different climate models, HadAM3P-HadRM3P and CESM-WACCM, can be adapted in two different ways to run on Cloud Computing environments from three different vendors: Amazon, Google and Microsoft. We have also evaluated qualitatively how the use of Cloud Computing can affect the allocation of resources by funding bodies, as well as issues related to computing security, including scientific reproducibility. Our first experiments were developed using the well-known ClimatePrediction.net (CPDN), which uses BOINC, over the infrastructure of two cloud providers, namely Microsoft Azure and Amazon Web Services (hereafter AWS). For this comparison we ran a set of thirteen-month climate simulations for CPDN in Azure and AWS using a range of different virtual machines (VMs) for HadRM3P (50 km resolution over the South America CORDEX region) nested in the global atmosphere-only model HadAM3P. These simulations were run on a single processor and took between 3 and 5 days to compute depending on the VM type. The last part of our simulation experiments was to run WACCM on different VMs on the Google Compute Engine (GCE) and to compare it with the supercomputer (SC) Finisterrae1 from the Centro de Supercomputacion de Galicia. It was shown that GCE gives better performance than the SC for smaller numbers of cores/MPI tasks, but the model throughput clearly shows that the SC performs better beyond approximately 100 cores (related to network speed and latency differences). From a cost point of view, Cloud Computing moves researchers from a traditional approach, where experiments were limited by the available hardware resources, to one where they are limited by monetary resources (how many resources can be afforded). As there is an increasing movement and recommendation for budgeting HPC projects on this technology (budgets can be calculated in a more realistic way), we could see a shift on

  17. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  18. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  19. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  20. Server Operation and Virtualization to Save Energy and Cost in Future Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Jun-Ho Huh

    2018-06-01

    Since the introduction of the LTE (Long Term Evolution) service, we have lived in a time of expanding amounts of data, with the amount produced increasing every year as smartphone penetration grows. Telecommunication service providers struggle to secure sufficient network capacity in order to maintain consumers' quick access to the data they need, yet maintaining maximum capacity and bandwidth at all times requires considerable cost and excess equipment. Providers therefore need to maintain an appropriate level of network capacity and to sustain service to customers through rapid network expansion when shortages arise. To date, telecommunication service providers have bought and operated network equipment produced by manufacturers such as Ericsson, Nokia, Cisco, and Samsung. Because this equipment is specialized for networking and built with advanced technologies, it delivers excellent performance but is very costly, and procurement is slow because providers must place an order and then wait for the manufacturer to produce and deliver it. In addition, the diversity of IoT devices creates cases that demand signaling and two-way data traffic as well as raw capacity. For these reasons, the need for NFV (Network Function Virtualization) arises: equipment functions are virtualized so that they run on x86-based compatible servers instead of on the network equipment manufacturer’s dedicated hardware. Operating on commodity servers reduces the wastage of hardware and makes it possible to keep pace with rapid hardware development. This study proposed an efficient system for reducing the cost of network server operation using such NFV technology and found that the cost was reduced by 24

  1. Cost, affordability and cost-effectiveness of strategies to control tuberculosis in countries with high HIV prevalence

    Directory of Open Access Journals (Sweden)

    Williams Brian G

    2005-12-01

    Background: The HIV epidemic has caused a dramatic increase in tuberculosis (TB) in East and southern Africa. Several strategies have the potential to reduce the burden of TB in high HIV prevalence settings, and cost and cost-effectiveness analyses can help to prioritize them when budget constraints exist. However, published cost and cost-effectiveness studies are limited. Methods: Our objective was to compare the cost, affordability and cost-effectiveness of seven strategies for reducing the burden of TB in countries with high HIV prevalence. A compartmental difference equation model of TB and HIV and recent cost data were used to assess the costs (year 2003 US$ prices) and effects (TB cases averted, deaths averted, DALYs gained) of these strategies in Kenya during the period 2004–2023. Results: The three lowest cost and most cost-effective strategies were improving TB cure rates, improving TB case detection rates, and improving both together. The incremental cost of combined improvements to case detection and cure was below US$15 million per year (7.5% of year 2000 government health expenditure); the mean cost per DALY gained of these three strategies ranged from US$18 to US$34. Antiretroviral therapy (ART) had the highest incremental costs, which by 2007 could be as large as total government health expenditures in year 2000. ART could also gain more DALYs than the other strategies, at a cost per DALY gained of around US$260 to US$530. Both the costs and effects of treatment for latent tuberculosis infection (TLTI) for HIV+ individuals were low; the cost per DALY gained ranged from about US$85 to US$370. Averting one HIV infection for less than US$250 would be as cost-effective as improving TB case detection and cure rates to WHO target levels. Conclusion: To reduce the burden of TB in high HIV prevalence settings, the immediate goal should be to increase TB case detection rates and, to the extent possible, improve TB cure rates, preferably
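
    The cost-effectiveness comparison above rests on a compartmental difference-equation model of TB transmission combined with strategy costs. The sketch below is a heavily simplified illustration of that kind of calculation, not the authors' calibrated Kenyan model: the two compartments, the parameter values (beta, detection, cure, progression, mortality) and the "improved strategy" numbers are all hypothetical.

```python
# A minimal sketch of a compartmental difference-equation TB model of the kind the
# study uses, with hypothetical parameters -- not the authors' calibrated model.
import numpy as np

def run_tb_model(years=20, beta=8.0, detection=0.5, cure=0.7,
                 progression=0.1, mortality=0.2, N=1.0):
    """Annual time steps; S = susceptible/latent pool, I = infectious TB (fractions of N)."""
    S, I = 0.99 * N, 0.01 * N
    cases = []
    for _ in range(years):
        new_infections = beta * I * S / N * progression   # new active TB per year
        treated = detection * cure * I                     # detected and cured
        died = mortality * I
        I = I + new_infections - treated - died
        S = S - new_infections + treated
        cases.append(new_infections)
    return np.array(cases)

# Higher case detection/cure lowers cumulative TB cases; pairing the difference in
# cases (and deaths/DALYs) with each strategy's incremental cost gives the
# cost-per-DALY-gained figures quoted above.
baseline = run_tb_model(detection=0.5, cure=0.7).sum()
improved = run_tb_model(detection=0.7, cure=0.85).sum()
print(f"cases averted (fraction of population): {baseline - improved:.4f}")
```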

  2. Predicting Future High-Cost Schizophrenia Patients Using High-Dimensional Administrative Data

    Directory of Open Access Journals (Sweden)

    Yajuan Wang

    2017-06-01

    Background: The burden of serious and persistent mental illness such as schizophrenia is substantial and requires health-care organizations to have adequate risk adjustment models to effectively allocate their resources to managing patients who are at the greatest risk. Currently available models underestimate health-care costs for those with mental or behavioral health conditions. Objectives: The study aimed to develop and evaluate predictive models for identification of future high-cost schizophrenia patients using advanced supervised machine learning methods. Methods: This was a retrospective study using a payer administrative database. The study cohort consisted of 97,862 patients diagnosed with schizophrenia (ICD9 code 295.*) from January 2009 to June 2014. Training (n = 34,510) and study evaluation (n = 30,077) cohorts were derived based on 12-month observation and prediction windows (PWs). The target was average total cost/patient/month in the PW. Three models (baseline, intermediate, final) were developed to assess the value of different variable categories for cost prediction (demographics, coverage, cost, health-care utilization, antipsychotic medication usage, and clinical conditions). Scalable orthogonal regression, the significant attribute selection in high dimensions method, and random forests regression were used to develop the models. The trained models were assessed in the evaluation cohort using the regression R2, patient classification accuracy (PCA), and cost accuracy (CA). The model performance was compared to the Centers for Medicare & Medicaid Services Hierarchical Condition Categories (CMS-HCC) model. Results: At the top 10% cost cutoff, the final model achieved 0.23 R2, 43% PCA, and 63% CA; in contrast, the CMS-HCC model achieved 0.09 R2 and 27% PCA with 45% CA. The final model and the CMS-HCC model identified 33 and 22%, respectively, of total cost at the top 10% cost cutoff. Conclusion: Using advanced feature selection leveraging detailed
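
    As a rough illustration of the modelling step described above (regressing future per-member-per-month cost on administrative features and then scoring patients at a top-10% cost cutoff), the following sketch uses scikit-learn's random forest regressor on synthetic data. The feature matrix, the skewed cost proxy and the cutoff logic are stand-ins for illustration, not the study's payer data or its scalable orthogonal regression and significant-attribute-selection steps.

```python
# Minimal sketch (not the study's code): fit a random forest cost model, then check
# R^2 and top-10% patient classification accuracy on a held-out cohort.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X_train, X_eval = rng.normal(size=(5000, 40)), rng.normal(size=(2000, 40))
y_train = np.exp(X_train[:, 0] + 0.5 * rng.normal(size=5000))   # skewed cost proxy
y_eval = np.exp(X_eval[:, 0] + 0.5 * rng.normal(size=2000))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_eval)
print("R2:", r2_score(y_eval, pred))

# Patient classification accuracy at the top-10% cost cutoff: of the patients the
# model flags as top 10%, what fraction are truly in the top 10%?
cutoff_true = np.quantile(y_eval, 0.9)
cutoff_pred = np.quantile(pred, 0.9)
flagged = pred >= cutoff_pred
print("top-10% classification accuracy:", np.mean(y_eval[flagged] >= cutoff_true))
```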

  3. 42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).

    Science.gov (United States)

    2010-10-01

    ... obtains accurate data with which to calculate either an operating or capital cost-to-charge ratio (or both... outlier payments will be based on operating and capital cost-to-charge ratios calculated based on a ratio... outliers). 412.84 Section 412.84 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...

  4. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in these codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  5. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  6. High-End Computing Challenges in Aerospace Design and Engineering

    Science.gov (United States)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had a significant impact on aerospace design and engineering and is poised to make an even greater impact in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses modeling capabilities needed for each challenge and presents projections of near- and far-term HEC computing requirements. NASA's HEC Project Columbia is described, and programming strategies are presented that are necessary to achieve high real performance.

  7. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  8. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  9. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  10. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi- material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  11. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  12. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    International Nuclear Information System (INIS)

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data is presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison

  13. High Efficiency and Low Cost Thermal Energy Storage System

    Energy Technology Data Exchange (ETDEWEB)

    Sienicki, James J. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Lv, Qiuping [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Moisseytsev, Anton [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Bucknor, Matthew [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2017-09-29

    BgtL, LLC (BgtL) is focused on developing and commercializing its proprietary compact technology for processes in the energy sector. One such application is a compact high efficiency Thermal Energy Storage (TES) system that utilizes the heat of fusion through phase change between solid and liquid to store and release energy at high temperatures and incorporate state-of-the-art insulation to minimize heat dissipation. BgtL’s TES system would greatly improve the economics of existing nuclear and coal-fired power plants by allowing the power plant to store energy when power prices are low and sell power into the grid when prices are high. Compared to existing battery storage technology, BgtL’s novel thermal energy storage solution can be significantly less costly to acquire and maintain, does not have any waste or environmental emissions, and does not deteriorate over time; it can keep constant efficiency and operates cleanly and safely. BgtL’s engineers are experienced in this field and are able to design and engineer such a system to a specific power plant’s requirements. BgtL also has a strong manufacturing partner to fabricate the system such that it qualifies for an ASME code stamp. BgtL’s vision is to be the leading provider of compact systems for various applications including energy storage. BgtL requests that all technical information about the TES designs be protected as proprietary information. To honor that request, only non-proprietary summaries are included in this report.

  14. Unenhanced computed tomography in acute renal colic reduces cost outside radiology department

    DEFF Research Database (Denmark)

    Lauritsen, J.; Andersen, J.R.; Nordling, J.

    2008-01-01

    BACKGROUND: Unenhanced multidetector computed tomography (UMDCT) is well established as the procedure of choice for radiologic evaluation of patients with renal colic. The procedure has both clinical and financial consequences for departments of surgery and radiology. However, the financial effect ... outside the radiology department is poorly elucidated. PURPOSE: To evaluate the financial consequences outside of the radiology department, a retrospective study comparing the ward occupation of patients examined with UMDCT to that of intravenous urography (IVU) was performed. MATERIAL AND METHODS: ... saved the hospital USD 265,000 every 6 months compared to the use of IVU. CONCLUSION: Use of UMDCT compared to IVU in patients with renal colic leads to cost savings outside the radiology department. (Publication date: 2008/12)

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  16. High Thermal Conductivity and High Wear Resistance Tool Steels for cost-effective Hot Stamping Tools

    Science.gov (United States)

    Valls, I.; Hamasaiid, A.; Padré, A.

    2017-09-01

    In hot stamping/press hardening, in addition to its shaping function, the tool controls the cycle time, the quality of the stamped components through determining the cooling rate of the stamped blank, the production costs and the feasibility frontier for stamping a given component. During the stamping, heat is extracted from the stamped blank and transported through the tool to the cooling medium in the cooling lines. Hence, the tools’ thermal properties determine the cooling rate of the blank, the heat transport mechanism, stamping times and temperature distribution. The tool’s surface resistance to adhesive and abrasive wear is also an important cost factor, as it determines the tool durability and maintenance costs. Wear is influenced by many tool material parameters, such as the microstructure, composition, hardness level and distribution of strengthening phases, as well as the tool’s working temperature. A decade ago, Rovalma developed a hot work tool steel for hot stamping that features a thermal conductivity of more than double that of any conventional hot work tool steel. Since that time, many complementary grades have been developed in order to provide tailored material solutions as a function of the production volume, degree of blank cooling and wear resistance requirements, tool geometries, tool manufacturing method, type and thickness of the blank material, etc. Recently, Rovalma has developed a new generation of high thermal conductivity, high wear resistance tool steel grades that enable the manufacture of cost effective tools for hot stamping to increase process productivity and reduce tool manufacturing costs and lead times. Both of these novel grades feature high wear resistance and high thermal conductivity to enhance tool durability and cut cycle times in the production process of hot stamped components. Furthermore, one of these new grades reduces tool manufacturing costs through low tool material cost and hardening through readily

  17. Production of solidified high level wastes: a cost comparison of solidification processes

    International Nuclear Information System (INIS)

    1977-06-01

    Differential cost estimates of the annual operating and maintenance costs and the capital costs for five HLW Waste Solidification Alternates were developed. The annual operating and maintenance cost estimates included the cost of labor, consumables, utilities, shipping casks, shipping and disposal at a federal repository. The capital cost included the cost of the component, installation and building. The differential cost estimates do not include equipment and facilities which are either shared with the reprocessing facility or are common between all of the alternates. The total annual cost differential between the five waste form alternates is summarized in tabular form. The Borosilicate Glass Alternate has the lowest total annual cost. The other alternates have costs ranging from $6.6 M to $7.4 M per year higher than the Glass Alternate, with the Supercalcine being the highest at a differential of $7.4 M per year. The major items in the cost estimates are the disposal costs in the operating cost estimates and the HLW Storage Tanks in the capital cost estimates. The Supercalcine Multibarrier Alternate ships 180 canisters per year more than the other alternates and consequently has a significantly higher operating cost. However, offsetting this, the Supercalcine Multibarrier Alternate does not require HLW Storage Tanks for decay because of the high heat conductivity of this product, and correspondingly the capital cost for this alternate is significantly lower than the other alternates. The radiological risk values are correlated with the cost evaluation normalized to cost ($)/MWe-yr.

  18. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
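
    A minimal sketch of the "simple batch processing" idea described above: several traits are evaluated in parallel worker processes instead of sequentially, so wall time approaches that of the slowest single-trait job. The ridge-style marker model, data sizes and worker count below are placeholders for illustration, not a real genomic evaluation pipeline.

```python
# Evaluate several traits in parallel workers (batch processing on one node);
# on a cluster the same pattern maps onto one job per trait. Toy data only.
import numpy as np
from multiprocessing import Pool

N_ANIMALS, N_MARKERS = 500, 2000

def evaluate_trait(seed):
    """One trait's genomic prediction job; in practice this is the expensive model fit."""
    rng = np.random.default_rng(seed)
    M = rng.integers(0, 3, size=(N_ANIMALS, N_MARKERS)).astype(float)  # marker genotypes
    y = M[:, :50].sum(axis=1) + rng.normal(size=N_ANIMALS)             # phenotype proxy
    lam = 100.0
    # Ridge (SNP-BLUP-like) solution: (M'M + lambda*I) b = M'y
    b = np.linalg.solve(M.T @ M + lam * np.eye(N_MARKERS), M.T @ y)
    return seed, float(np.abs(b).max())

if __name__ == "__main__":
    with Pool(processes=4) as pool:               # one worker per core or cluster slot
        for trait, top_effect in pool.map(evaluate_trait, range(4)):
            print(f"trait {trait}: largest |marker effect| = {top_effect:.3f}")
```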

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  20. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  1. Symbolic computation and its application to high energy physics

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1981-01-01

    It is clear that we are in the middle of an electronic revolution whose effect will be as profound as the industrial revolution. The continuing advances in computing technology will provide us with devices which will make present day computers appear primitive. In this environment, the algebraic and other non-numerical capabilities of such devices will become increasingly important. These lectures will review the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. We shall begin with a brief description of the available systems and examine the data objects which they consider. As an example of the facilities which these systems can offer, we shall then consider the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, we shall study the implications which the current developments in hardware technology hold for scientific problem solving. (orig.)

  2. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  3. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  4. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  5. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management or the simulation of complex processes in a wide variety of industries. (Author)

  6. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  7. Manufacturing High-Quality Carbon Nanotubes at Lower Cost

    Science.gov (United States)

    Benavides, Jeanette M.; Lidecker, Henning

    2004-01-01

    A modified electric-arc welding process has been developed for manufacturing high-quality batches of carbon nanotubes at relatively low cost. Unlike in some other processes for making carbon nanotubes, metal catalysts are not used and, consequently, it is not necessary to perform extensive cleaning and purification. Also, unlike some other processes, this process is carried out at atmospheric pressure under a hood instead of in a closed, pressurized chamber; as a result, the present process can be implemented more easily. Although the present welding-based process includes an electric arc, it differs from a prior electric-arc nanotube-production process. The welding equipment used in this process includes an AC/DC welding power source with an integral helium-gas delivery system and circulating water for cooling an assembly that holds one of the welding electrodes (in this case, the anode). The cathode is a hollow carbon (optionally, graphite) rod having an outside diameter of 2 in. (approximately equal to 5.1 cm) and an inside diameter of 5/8 in. (approximately equal to 1.6 cm). The cathode is partly immersed in a water bath, such that it protrudes about 2 in. (about 5.1 cm) above the surface of the water. The bottom end of the cathode is held underwater by a clamp, to which is connected the grounding cable of the welding power source. The anode is a carbon rod 1/8 in. (approximately equal to 0.3 cm) in diameter. The assembly that holds the anode includes a thumbknob- driven mechanism for controlling the height of the anode. A small hood is placed over the anode to direct a flow of helium downward from the anode to the cathode during the welding process. A bell-shaped exhaust hood collects the helium and other gases from the process. During the process, as the anode is consumed, the height of the anode is adjusted to maintain an anode-to-cathode gap of 1 mm. The arc-welding process is continued until the upper end of the anode has been lowered to a specified height

  8. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in an IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid (MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing, (quantum computing, DNA computing, cellular engineering, and neural networks).

  9. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto

    2013-09-10

    State-of-the-art computers need high performance transistors, which consume ultra-low power resulting in longer battery lifetime. Billions of transistors are integrated neatly using matured silicon fabrication process to maintain the performance per cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high performance transistors are considered as the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process to convert high performance silicon electronics into flexible and semi-transparent one while retaining its performance, process compatibility, integration density and cost. We demonstrate high-k/metal gate stack based p-type metal oxide semiconductor field effect transistors on 4 inch silicon fabric released from bulk silicon (100) wafers with sub-threshold swing of 80 mV dec⁻¹ and on/off ratio of near 10⁴ within 10% device uniformity with a minimum bending radius of 5 mm and an average transmittance of ~7% in the visible spectrum.

  10. Association of prescription abandonment with cost share for high-cost specialty pharmacy medications.

    Science.gov (United States)

    Gleason, Patrick P; Starner, Catherine I; Gunderson, Brent W; Schafer, Jeremy A; Sarran, H Scott

    2009-10-01

    In 2008, specialty medications accounted for 15.1% of total pharmacy benefit medication spending, and per member expenditures have increased by 11.1% annually from 2004 to 2008 within a commercially insured population of 8 million members. Insurers face increasing pressure to control specialty medication expenditures and to rely on increasing member cost share through creation of a fourth copayment tier within the incentive-based formulary pharmacy benefit system. Data are needed on the influence that member out-of-pocket (OOP) expense may have on prescription abandonment (defined as the patient never actually taking possession of the medication despite evidence of a written prescription generated by a prescriber). To explore the relationship between prescription abandonment and OOP expense among individuals newly initiating high-cost medication therapy with a tumor necrosis factor (TNF) blocker or multiple sclerosis (MS) biologic agent. This observational cross-sectional study queried a midwestern and southern U.S. database of 13,172,480 commercially insured individuals to find members with a pharmacy benefit-adjudicated claim for a TNF blocker or MS specialty medication during the period from July 2006 through June 2008. Prescription abandonment was assessed among continuously enrolled members newly initiating TNF blocker or MS therapy. Prescription abandonment was defined as reversal of the adjudicated claim with no evidence of a subsequent additional adjudicated paid claim in the ensuing 90 days. Separate analyses for MS and TNF blocker therapy were performed to assess the association between member OOP expense and abandonment rate using the Cochran-Armitage test for trend and multivariate logistic regression. Members were placed into 1 of the 7 following OOP expense groups per claim: $0-$100, $101-$150, $151-$200, $201-$250, $251-$350, $351-$500, or more than $500. The association of MS or TNF blocker abandonment rate with OOP expense was tested with logistic
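
    The association between ordered out-of-pocket expense groups and abandonment is tested with the Cochran-Armitage test for trend. Below is a hedged sketch of that test on made-up counts (the group sizes and abandonment numbers are invented, not the study's data); the multivariate logistic-regression step would normally follow with a package such as statsmodels.

```python
# Cochran-Armitage trend test: does abandonment rate increase monotonically across
# ordered out-of-pocket (OOP) expense groups? Counts below are hypothetical.
import numpy as np
from scipy.stats import norm

scores    = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)   # $0-100 ... >$500 groups
n_claims  = np.array([4000, 900, 500, 300, 400, 300, 250], dtype=float)
n_abandon = np.array([ 220,  70,  50,  40,  70,  70,  80], dtype=float)

p_hat = n_abandon.sum() / n_claims.sum()
t_stat = np.sum(scores * (n_abandon - n_claims * p_hat))
var_t = p_hat * (1 - p_hat) * (np.sum(n_claims * scores**2)
                               - np.sum(n_claims * scores) ** 2 / n_claims.sum())
z = t_stat / np.sqrt(var_t)
print(f"Cochran-Armitage Z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.2g}")
```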

  11. Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework

    Science.gov (United States)

    Lin, Cheng-Min

    Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating Continuous-Time Markov Chains (CTMCs) and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two low-cost, high-performance embedded controllers, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that, in terms of both power consumption and interrupt overhead, Cortex-M3 performs better than ARM7.
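
    The GSPN-to-CTMC-to-MRM chain described above boils down to solving for the steady-state distribution of a continuous-time Markov chain and weighting it by per-state rewards. The toy three-state interrupt model below (idle, ISR, nested ISR) illustrates that computation; the rates and reward values are hypothetical, not the paper's ARM7 or Cortex-M3 parameters.

```python
# CTMC steady state plus a Markov Reward Model evaluation for a toy interrupt model.
import numpy as np

# Generator matrix Q (rows sum to zero); state order: idle, ISR, nested ISR
Q = np.array([
    [-2.0,  2.0,  0.0],   # interrupts arrive at rate 2
    [ 5.0, -5.5,  0.5],   # ISR completes at 5, nested interrupt preempts at 0.5
    [ 0.0,  8.0, -8.0],   # nested ISR completes at 8 and returns to the ISR
])
rewards = np.array([10.0, 80.0, 95.0])   # e.g. mW consumed in each state (made up)

# Steady-state distribution pi solves pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", np.round(pi, 4))
print("expected power (reward rate):", float(pi @ rewards), "mW")
```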

  12. Some selection criteria for computers in real-time systems for high energy physics

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1980-01-01

    The right choice of program source is of great importance for the organization of real-time systems, as cost and reliability are decisive factors. Some selection criteria for program sources for high energy physics multiwire chamber spectrometers (MWCS) are considered in this report. MWCS's accept bits of information from event patterns. Large and small computers, microcomputers and intelligent controllers in CAMAC crates are compared with respect to the following characteristics: data exchange speed, number of addresses for peripheral devices, cost of interfacing a peripheral device, sizes of buffer and mass memory, configuration costs, and the mean time between failures (MTBF). The results of comparisons are shown by plots and histograms which allow the selection of program sources according to the above criteria. (Auth.)

  13. Design of Low Cost, Highly Adsorbent Activated Carbon Fibers

    National Research Council Canada - National Science Library

    Mangun, Christian

    2003-01-01

    ... EKOS has developed a novel activated carbon fiber (ACF) that combines the low cost and durability of GAC with tailored pore size and pore surface chemistry for improved defense against chemical agents...

  14. A Low-Cost, High-Precision Navigator, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Toyon Research Corporation proposes to develop and demonstrate a prototype low-cost precision navigation system using commercial-grade gyroscopes and accelerometers....

  15. A high-performance, low-cost, leading edge discriminator

    Indian Academy of Sciences (India)

    ... commercial discriminators. A low-cost discriminator is an essential requirement of the GRAPES-3 experiment where a large number of discriminator channels are used.

  16. The high cost of clinical negligence litigation in the NHS.

    Science.gov (United States)

    Tingle, John

    2017-03-09

    John Tingle, Reader in Health Law at Nottingham Trent University, discusses a consultation document from the Department of Health on introducing fixed recoverable costs in lower-value clinical negligence claims.

  17. Computation of high Reynolds number internal/external flows

    Science.gov (United States)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  18. Computation of high Reynolds number internal/external flows

    International Nuclear Information System (INIS)

    Cline, M.C.; Wilmoth, R.G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented
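
    Both records above describe the explicit MacCormack predictor-corrector scheme at the heart of VNAP2. As an illustration of the scheme itself (not of VNAP2, which solves the full two-dimensional Navier-Stokes equations with turbulence models and characteristic boundary conditions), the sketch below applies MacCormack's method to the 1D inviscid Burgers equation on a periodic domain.

```python
# Explicit MacCormack predictor-corrector for u_t + (u^2/2)_x = 0, periodic BCs.
import numpy as np

nx, nsteps = 200, 100
dx, dt = 1.0 / nx, 0.002
x = np.linspace(0.0, 1.0, nx)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # smooth initial condition

def flux(u):
    return 0.5 * u ** 2

for _ in range(nsteps):
    f = flux(u)
    # Predictor: forward difference in space
    u_pred = u - dt / dx * (np.roll(f, -1) - f)
    f_pred = flux(u_pred)
    # Corrector: backward difference on the predicted state, then average
    u = 0.5 * (u + u_pred - dt / dx * (f_pred - np.roll(f_pred, 1)))

print("max/min velocity after 100 steps:", u.max(), u.min())
```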

  19. 2003 Conference for Computing in High Energy and Nuclear Physics

    International Nuclear Information System (INIS)

    Schalk, T.

    2003-01-01

    The conference was subdivided into the following separate tracks. Electronic presentations and/or videos are provided on the main website link. Sessions: Plenary Talks and Panel Discussion; Grid Architecture, Infrastructure, and Grid Security; HENP Grid Applications, Testbeds, and Demonstrations; HENP Computing Systems and Infrastructure; Monitoring; High Performance Networking; Data Acquisition, Triggers and Controls; First Level Triggers and Trigger Hardware; Lattice Gauge Computing; HENP Software Architecture and Software Engineering; Data Management and Persistency; Data Analysis Environment and Visualization; Simulation and Modeling; and Collaboration Tools and Information Systems

  20. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  1. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  2. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  3. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guideline to future development of such a Monte Carlo code is given.
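
    As a reminder of what the Monte Carlo half of such a coupled code system does at its simplest, the sketch below estimates uncollided transmission through a slab by sampling free paths. The cross section and thickness are arbitrary, and a real SNQ shielding calculation would of course track energy, angle and secondary particles.

```python
# Toy Monte Carlo estimate of uncollided transmission through a slab shield.
import numpy as np

def transmission(sigma_t=0.5, thickness=10.0, n_particles=100_000, seed=1):
    """Fraction of mono-directional particles crossing the slab without a collision.

    sigma_t: total macroscopic cross section (1/cm); thickness in cm (hypothetical values).
    """
    rng = np.random.default_rng(seed)
    # Sample distance to first collision from an exponential distribution
    path = rng.exponential(scale=1.0 / sigma_t, size=n_particles)
    return np.mean(path > thickness)

mc = transmission()
analytic = np.exp(-0.5 * 10.0)
print(f"Monte Carlo: {mc:.2e}, analytic exp(-sigma_t * t): {analytic:.2e}")
```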

  4. The cost-effectiveness of the RSI QuickScan intervention programme for computer workers: Results of an economic evaluation alongside a randomised controlled trial.

    Science.gov (United States)

    Speklé, Erwin M; Heinrich, Judith; Hoozemans, Marco J M; Blatter, Birgitte M; van der Beek, Allard J; van Dieën, Jaap H; van Tulder, Maurits W

    2010-11-11

    The costs of arm, shoulder and neck symptoms are high. In order to decrease these costs employers implement interventions aimed at reducing these symptoms. One frequently used intervention is the RSI QuickScan intervention programme. It establishes a risk profile of the target population and subsequently advises interventions following a decision tree based on that risk profile. The purpose of this study was to perform an economic evaluation, from both the societal and companies' perspective, of the RSI QuickScan intervention programme for computer workers. In this study, effectiveness was defined at three levels: exposure to risk factors, prevalence of arm, shoulder and neck symptoms, and days of sick leave. The economic evaluation was conducted alongside a randomised controlled trial (RCT). Participating computer workers from 7 companies (N = 638) were assigned to either the intervention group (N = 320) or the usual care group (N = 318) by means of cluster randomisation (N = 50). The intervention consisted of a tailor-made programme, based on a previously established risk profile. At baseline, 6 and 12 month follow-up, the participants completed the RSI QuickScan questionnaire. Analyses to estimate the effect of the intervention were done according to the intention-to-treat principle. To compare costs between groups, confidence intervals for cost differences were computed by bias-corrected and accelerated bootstrapping. The mean intervention costs, paid by the employer, were 59 euro per participant in the intervention and 28 euro in the usual care group. Mean total health care and non-health care costs per participant were 108 euro in both groups. As to the cost-effectiveness, improvement in received information on healthy computer use as well as in their work posture and movement was observed at higher costs. With regard to the other risk factors, symptoms and sick leave, only small and non-significant effects were found. In this study, the RSI Quick
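
    Cost differences between groups in the trial are compared with bias-corrected and accelerated (BCa) bootstrap confidence intervals. The sketch below shows the simpler percentile-bootstrap variant on invented cost data, just to illustrate the resampling idea; it is not the study's analysis code and omits the bias and acceleration corrections.

```python
# Percentile bootstrap CI for the mean cost difference between two trial arms.
import numpy as np

rng = np.random.default_rng(42)
costs_intervention = rng.gamma(shape=1.5, scale=80.0, size=320)   # euro, hypothetical
costs_usual_care   = rng.gamma(shape=1.5, scale=72.0, size=318)

def bootstrap_diff_ci(a, b, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for mean(a) - mean(b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                    - rng.choice(b, size=b.size, replace=True).mean())
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_diff_ci(costs_intervention, costs_usual_care)
print(f"95% CI for mean cost difference: ({lo:.1f}, {hi:.1f}) euro")
```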

  5. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on high computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards deliver a lot of computing power at a very low price. Today a large number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends of GPU use in HEP.

  6. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface (MPI) and OpenMP, are taken into account. The properties of these programming methods are demonstrated experimentally on a fast Fourier transform and a discrete cosine transform, and they are compared with the capabilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods for the signal processing domain, together with the implementation of new, fast routines, is proposed as well.
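
    The record compares MPI- and OpenMP-based FFT and DCT implementations; the paper's code is not reproduced here. As a loose illustration of the same idea of spreading independent transforms across CPU cores, the sketch below distributes per-frame FFTs over a process pool using only NumPy and the Python standard library; the frame size and worker count are arbitrary assumptions.

```python
import numpy as np
from multiprocessing import Pool

def fft_magnitude(frame):
    """Magnitude spectrum of one real-valued signal frame."""
    return np.abs(np.fft.rfft(frame))

def parallel_spectra(frames, n_workers=4):
    """Map independent per-frame FFTs onto a pool of worker processes."""
    with Pool(processes=n_workers) as pool:
        return pool.map(fft_magnitude, frames)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.standard_normal(4096) for _ in range(64)]   # 64 independent frames
    spectra = parallel_spectra(frames)
    print(len(spectra), spectra[0].shape)                     # 64 frames, 2049 bins each
```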

  7. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made in computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  8. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of supporting much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort of the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include the type and sizing of the facilities, advance preparations, shipping, and on-site support, as well as an evaluation of the value of the facility to the workshop participants.

  9. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows one to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks introduced by this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
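
    The abstract describes evaluating byte code for very large expressions on a virtual machine instead of compiling native code. The toy stack-based interpreter below sketches that general idea only; the instruction names and tuple encoding are invented for illustration and have nothing to do with O'Mega's actual byte-code format.

```python
# Hypothetical byte-code format for illustration: each instruction is a tuple.
# ("PUSH", c) pushes a constant, ("LOAD", i) pushes input number i,
# ("ADD",) and ("MUL",) pop two operands and push the result.
def run_vm(bytecode, inputs):
    stack = []
    for instr in bytecode:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "LOAD":
            stack.append(inputs[instr[1]])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# Evaluate 2*x0 + x1*x1 for x0 = 3, x1 = 4  ->  22.0
program = [("PUSH", 2.0), ("LOAD", 0), ("MUL",),
           ("LOAD", 1), ("LOAD", 1), ("MUL",), ("ADD",)]
print(run_vm(program, [3.0, 4.0]))
```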

  10. Cost-effectiveness analysis of online hemodiafiltration versus high-flux hemodialysis

    Directory of Open Access Journals (Sweden)

    Ramponi F

    2016-09-01

    Full Text Available Francesco Ramponi,1,2 Claudio Ronco,1,3 Giacomo Mason,1 Enrico Rettore,4 Daniele Marcelli,5,6 Francesca Martino,1,3 Mauro Neri,1,7 Alejandro Martin-Malo,8 Bernard Canaud,5,9 Francesco Locatelli10 1International Renal Research Institute (IRRIV), San Bortolo Hospital, Vicenza, 2Department of Economics and Management, University of Padova, Padova, 3Department of Nephrology, San Bortolo Hospital, Vicenza, 4Department of Sociology and Social Research, University of Trento, FBK-IRVAPP & IZA, Trento, Italy; 5Europe, Middle East, Africa and Latin America Medical Board, Fresenius Medical Care, Bad Homburg, Germany; 6Danube University, Krems, Austria; 7Department of Management and Engineering, University of Padova, Vicenza, Italy; 8Nephrology Unit, Reina Sofia University Hospital, Córdoba, Spain; 9School of Medicine, Montpellier University, Montpellier, France; 10Department of Nephrology, Manzoni Hospital, Lecco, Italy Background: Clinical studies suggest that hemodiafiltration (HDF) may lead to better clinical outcomes than high-flux hemodialysis (HF-HD), but concerns have been raised about the cost-effectiveness of HDF versus HF-HD. The aim of this study was to investigate whether the clinical benefits, in terms of longer survival and better health-related quality of life, are worth the possibly higher costs of HDF compared to HF-HD. Methods: The analysis comprised a simulation based on the combined results of previously published studies, with the following steps: (1) estimation of the survival function of HF-HD patients from a clinical trial and of HDF patients using the risk reduction estimated in a meta-analysis; (2) simulation of the survival of the same sample of patients as if allocated to HF-HD or HDF using three-state Markov models; and (3) application of state-specific health-related quality of life coefficients and differential costs derived from the literature. Several Monte Carlo simulations were performed, including simulations for patients with different
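
    The simulation step uses three-state Markov models with state-specific quality-of-life coefficients and costs accumulated over discounted cycles. The sketch below shows the generic mechanics of such a cohort model only; the state definitions, transition probabilities, costs, utilities and discount rate are placeholder assumptions, not values from the study.

```python
import numpy as np

# Illustrative three-state cohort model: 0 = alive on dialysis, 1 = hospitalised, 2 = dead.
# Transition probabilities, per-cycle costs, utilities and the discount rate are placeholders.
P = np.array([[0.90, 0.07, 0.03],
              [0.60, 0.30, 0.10],
              [0.00, 0.00, 1.00]])
state_cost = np.array([30000.0, 45000.0, 0.0])     # euro per cycle in each state
state_utility = np.array([0.70, 0.50, 0.0])        # QALY weight per cycle in each state
discount_rate = 0.03

def run_markov(P, state_cost, state_utility, n_cycles=20):
    occupancy = np.array([1.0, 0.0, 0.0])           # whole cohort starts on dialysis
    total_cost, total_qaly = 0.0, 0.0
    for t in range(n_cycles):
        df = 1.0 / (1.0 + discount_rate) ** t
        total_cost += df * occupancy @ state_cost
        total_qaly += df * occupancy @ state_utility
        occupancy = occupancy @ P                    # advance the cohort one cycle
    return total_cost, total_qaly

cost, qaly = run_markov(P, state_cost, state_utility)
print(f"discounted cost: {cost:,.0f} euro, discounted QALYs: {qaly:.2f}")
```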

  11. Assessing the high costs of new nuclear power plants

    International Nuclear Information System (INIS)

    Komanoff, C.

    1984-01-01

    The variation in nuclear plant capital costs, both over time and within the current generation of plants, is considerable and is one of the impressive facts associated with that technology. This article concerns statistical methods for determining relative management efficiency or inefficiency in nuclear plant construction. It emphasizes the need to adjust raw cost data for important variables in order to make fair comparisons among disparate projects. The analysis identifies the costliest and least-costly projects and elucidates trends that helped or harmed several or more projects at the same time. Its findings can form a supplement and guide for engineering and management audits of individual nuclear projects. 5 references, 1 figure, 1 table

  12. Alternative ceramic circuit constructions for low cost, high reliability applications

    International Nuclear Information System (INIS)

    Modes, Ch.; O'Neil, M.

    1997-01-01

    The growth in the use of hybrid circuit technology has been challenged by recent advances in low-cost laminate technology, as well as by the continued integration of functions into ICs. Size reduction of hybrid 'packages' has turned out to be a means to extend the useful life of this technology. The suppliers of thick film materials technology have responded to this challenge by developing a number of technology options to reduce circuit size, increase density, and reduce overall cost, while maintaining or increasing reliability. This paper provides an overview of the processes that have been developed and, in many cases, are widely used to produce low-cost, reliable microcircuits. Comparisons of these circuit fabrication processes are made, with a discussion of the advantages and disadvantages of each technology. (author)

  13. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

    International Nuclear Information System (INIS)

    Wattson, Daniel A.; Hunink, M.G. Myriam; DiPiro, Pamela J.; Das, Prajnan; Hodgson, David C.; Mauch, Peter M.; Ng, Andrea K.

    2014-01-01

    Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening

  14. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wattson, Daniel A., E-mail: dwattson@partners.org [Harvard Radiation Oncology Program, Boston, Massachusetts (United States); Hunink, M.G. Myriam [Departments of Radiology and Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands and Center for Health Decision Science, Harvard School of Public Health, Boston, Massachusetts (United States); DiPiro, Pamela J. [Department of Imaging, Dana-Farber Cancer Institute, Boston, Massachusetts (United States); Das, Prajnan [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Hodgson, David C. [Department of Radiation Oncology, University of Toronto, Toronto, Ontario (Canada); Mauch, Peter M.; Ng, Andrea K. [Department of Radiation Oncology, Brigham and Women' s Hospital and Dana-Farber Cancer Institute, Boston, Massachusetts (United States)

    2014-10-01

    Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening

  15. Low-dose chest computed tomography for lung cancer screening among Hodgkin lymphoma survivors: a cost-effectiveness analysis.

    Science.gov (United States)

    Wattson, Daniel A; Hunink, M G Myriam; DiPiro, Pamela J; Das, Prajnan; Hodgson, David C; Mauch, Peter M; Ng, Andrea K

    2014-10-01

    Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening may be cost effective for all smokers but possibly not
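
    The three records above describe a model that discounts at 3% and compares each strategy's incremental cost-effectiveness ratio (ICER) against a $50,000/QALY willingness-to-pay threshold, reporting, for example, $34,841/QALY for a 25-year-old male smoker treated with mantle RT. The snippet below only illustrates how an ICER is computed and compared with such a threshold; the cost and QALY totals passed in are hypothetical, not outputs of the published model.

```python
WTP = 50_000.0   # willingness-to-pay threshold, dollars per QALY

def icer(cost_screen, cost_no_screen, qaly_screen, qaly_no_screen):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_screen - cost_no_screen) / (qaly_screen - qaly_no_screen)

# Hypothetical lifetime totals for one patient profile (for illustration only)
ratio = icer(cost_screen=21_000.0, cost_no_screen=15_000.0,
             qaly_screen=14.35, qaly_no_screen=14.18)
verdict = "cost effective" if ratio < WTP else "not cost effective"
print(f"ICER = ${ratio:,.0f} per QALY -> {verdict} at a ${WTP:,.0f}/QALY threshold")
```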

  16. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of running simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
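
    Monte Carlo methods appear throughout this record. As a minimal, generic illustration (not one of the HEP codes discussed), the sketch below estimates a one-dimensional integral and its statistical uncertainty by plain Monte Carlo sampling.

```python
import numpy as np

def mc_integrate(f, a, b, n_samples=1_000_000, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b], with its standard error."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n_samples)
    fx = f(x)
    estimate = (b - a) * fx.mean()
    stderr = (b - a) * fx.std(ddof=1) / np.sqrt(n_samples)
    return estimate, stderr

# The integral of sin(x) over [0, pi] is exactly 2
value, err = mc_integrate(np.sin, 0.0, np.pi)
print(f"{value:.4f} +/- {err:.4f}")
```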

  17. The High Direct Medical Costs of Prader-Willi Syndrome.

    Science.gov (United States)

    Shoffstall, Andrew J; Gaebler, Julia A; Kreher, Nerissa C; Niecko, Timothy; Douglas, Diah; Strong, Theresa V; Miller, Jennifer L; Stafford, Diane E; Butler, Merlin G

    2016-08-01

    To assess medical resource utilization associated with Prader-Willi syndrome (PWS) in the US, hypothesized to be greater relative to a matched control group without PWS. We used a retrospective case-matched control design and longitudinal US administrative claims data (MarketScan) during a 5-year enrollment period (2009-2014). Patients with PWS were identified by Classification of Diseases, Ninth Revision, Clinical Modification diagnosis code 759.81. Controls were matched on age, sex, and payer type. Outcomes included total, outpatient, inpatient and prescription costs. After matching and application of inclusion/exclusion criteria, we identified 2030 patients with PWS (1161 commercial, 38 Medicare supplemental, and 831 Medicaid). Commercially insured patients with PWS (median age 10 years) had 8.8-times greater total annual direct medical costs than their counterparts without PWS (median age 10 years: median costs $14 907 vs $819; P < .0001; mean costs: $28 712 vs $3246). Outpatient care comprised the largest portion of medical resource utilization for enrollees with and without PWS (median $5605 vs $675; P < .0001; mean $11 032 vs $1804), followed by mean annual inpatient and medication costs, which were $10 879 vs $1015 (P < .001) and $6801 vs $428 (P < .001), respectively. Total annual direct medical costs were ∼42% greater for Medicaid-insured patients with PWS than their commercially insured counterparts, an increase partly explained by claims for Medicaid Waiver day and residential habilitation. Direct medical resource utilization was considerably greater among patients with PWS than members without the condition. This study provides a first step toward quantifying the financial burden of PWS posed to individuals, families, and society. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures, such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  19. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons, and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  20. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    J. Rodolfo Maestre-Rendon

    2017-11-01

    Full Text Available Manual measurements of foot anthropometry can lead to errors since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses that are given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.
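
    The record describes capturing a footprint image on a Raspberry Pi and analysing it at low computational cost. The sketch below is not the FPDSS pipeline; it only illustrates a typical low-cost segmentation step (blur, Otsu threshold, largest contour) with OpenCV, and the input file name and pixels-per-cm calibration are assumptions.

```python
import cv2

def footprint_area_cm2(image_path, pixels_per_cm=40.0):
    """Segment the footprint from a podoscope image and return its area in cm^2 (simplified)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    # Otsu thresholding separates the dark footprint from the illuminated background
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    footprint = max(contours, key=cv2.contourArea)        # largest blob assumed to be the foot
    return cv2.contourArea(footprint) / pixels_per_cm ** 2

# print(footprint_area_cm2("footprint.png"))   # hypothetical input image
```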

  1. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  2. Aspects of pulmonary histiocytosis X on high resolution computed tomography

    International Nuclear Information System (INIS)

    Costa, N.S.S.; Castro Lessa Angela, M.T. de; Angelo Junior, J.R.L.; Silva, F.M.D.; Kavakama, J.; Carvalho, C.R.R. de; Cerri, G.G.

    1995-01-01

    Pulmonary histiocytosis X is a disease that occurs in young adults and presents with nodules and cysts, mainly in upper lobes, with consequent pulmonary fibrosis. These pulmonary changes are virtually pathognomonic findings on high resolution computed tomography, that allows estimate the area of the lung involved and distinguish histiocytosis X from other disorders that also produces nodules and cysts. (author). 10 refs, 2 tabs, 6 figs

  3. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  4. Many Mobile Health Apps Target High-Need, High-Cost Populations, But Gaps Remain.

    Science.gov (United States)

    Singh, Karandeep; Drouin, Kaitlin; Newmark, Lisa P; Lee, JaeHo; Faxvaag, Arild; Rozenblum, Ronen; Pabo, Erika A; Landman, Adam; Klinger, Elissa; Bates, David W

    2016-12-01

    With rising smartphone ownership, mobile health applications (mHealth apps) have the potential to support high-need, high-cost populations in managing their health. While the number of available mHealth apps has grown substantially, no clear strategy has emerged on how providers should evaluate and recommend such apps to patients. Key stakeholders, including medical professional societies, insurers, and policy makers, have largely avoided formally recommending apps, which forces patients to obtain recommendations from other sources. To help stakeholders overcome barriers to reviewing and recommending apps, we evaluated 137 patient-facing mHealth apps-those intended for use by patients to manage their health-that were highly rated by consumers and recommended by experts and that targeted high-need, high-cost populations. We found that there is a wide variety of apps in the marketplace but that few apps address the needs of the patients who could benefit the most. We also found that consumers' ratings were poor indications of apps' clinical utility or usability and that most apps did not respond appropriately when a user entered potentially dangerous health information. Going forward, data privacy and security will continue to be major concerns in the dissemination of mHealth apps. Project HOPE—The People-to-People Health Foundation, Inc.

  5. The High Cost of Harsh Discipline and Its Disparate Impact

    Science.gov (United States)

    Rumberger, Russell W.; Losen, Daniel J.

    2016-01-01

    School suspension rates have been rising since the early 1970s, especially for children of color. One body of research has demonstrated that suspension from school is harmful to students, as it increases the risk of retention and school dropout. Another has demonstrated that school dropouts impose huge social costs on their states and localities,…

  6. Distribution Grid Integration Costs Under High PV Penetrations Workshop

    Science.gov (United States)

    Workshop agenda residue; recoverable topics include: the utility business model and structure (policies and regulations, revenue requirements and investment practices); Panel 3: Future Directions in Grid Integration Cost-Benefit Analysis; determining distribution grid integration costs and their incorporation into utility planning; and notes on future needs.

  7. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  8. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  9. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  10. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
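
    Both copies of this record survey applications that need more than IEEE 64-bit precision. As a small, generic illustration (not tied to any of the packages surveyed), the sketch below uses the mpmath library to raise the working precision; the choice of 50 digits is arbitrary.

```python
from mpmath import mp, mpf, sqrt

# Request 50 significant digits instead of the ~16 provided by IEEE 64-bit floats
mp.dps = 50
print(sqrt(mpf(2)))       # square root of 2 to 50 digits
print(mpf(1) / mpf(3))    # 0.333... to 50 digits

# The same expression at double-like precision, for comparison
mp.dps = 16
print(sqrt(mpf(2)))
```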

  11. A Low-Cost Time-Hopping Impulse Radio System for High Data Rate Transmission

    Directory of Open Access Journals (Sweden)

    Jinyun Zhang

    2005-03-01

    Full Text Available We present an efficient, low-cost implementation of time-hopping impulse radio that fulfills the spectral mask mandated by the FCC and is suitable for high-data-rate, short-range communications. Key features are (i) all-baseband implementation that obviates the need for passband components, (ii) symbol-rate (not chip-rate) sampling, A/D conversion, and digital signal processing, (iii) fast acquisition due to novel search algorithms, and (iv) spectral shaping that can be adapted to accommodate different spectrum regulations and interference environments. Computer simulations show that this system can provide 110 Mbps at 7–10 m distance, as well as higher data rates at shorter distances under FCC emissions limits. Due to the spreading concept of time-hopping impulse radio, the system can sustain multiple simultaneous users, and can suppress narrowband interference effectively.

  12. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar

    2010-10-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.

  13. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar; Johnson, Mark; Jungdong Park,; Adabi, Ehsan; Jones, Kevin; Niknejad, Ali

    2010-01-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.
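
    Both copies of this record discuss architectures for large adaptive beamforming arrays rather than specific code. As a hedged, minimal illustration of the underlying idea only, the sketch below forms delay-and-sum (matched) weights for a uniform linear array and evaluates the normalised array response at a few angles; the element count, spacing and angles are arbitrary assumptions, not parameters from the paper.

```python
import numpy as np

def steering_vector(n_elements, spacing_wavelengths, angle_deg):
    """Narrowband steering vector of a uniform linear array for a given arrival angle."""
    n = np.arange(n_elements)
    phase = 2.0 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(angle_deg))
    return np.exp(1j * phase)

def response_db(weights, steering):
    """Normalised array response (0 dB in the steered direction for matched weights)."""
    response = np.vdot(weights, steering)          # conj(weights) . steering
    return 20.0 * np.log10(np.abs(response) / np.sum(np.abs(weights)))

n = 16                                              # 16 elements, half-wavelength spacing
look = steering_vector(n, 0.5, angle_deg=20.0)
weights = look / n                                  # delay-and-sum (matched) weights
for angle in (0.0, 20.0, 40.0):
    db = response_db(weights, steering_vector(n, 0.5, angle))
    print(f"response at {angle:5.1f} deg: {db:7.2f} dB")
```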

  14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  15. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data

  16. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    Science.gov (United States)

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  17. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  18. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.

  19. Costs and clinical outcomes in individuals without known coronary artery disease undergoing coronary computed tomographic angiography from an analysis of Medicare category III transaction codes.

    Science.gov (United States)

    Min, James K; Shaw, Leslee J; Berman, Daniel S; Gilmore, Amanda; Kang, Ning

    2008-09-15

    Multidetector coronary computed tomographic angiography (CCTA) demonstrates high accuracy for the detection and exclusion of coronary artery disease (CAD) and predicts adverse prognosis. To date, opportunity costs relating the clinical and economic outcomes of CCTA compared with other methods of diagnosing CAD, such as myocardial perfusion single-photon emission computed tomography (SPECT), remain unknown. An observational, multicenter, patient-level analysis of patients without known CAD who underwent CCTA or SPECT was performed. Patients who underwent CCTA (n = 1,938) were matched to those who underwent SPECT (n = 7,752) on 8 demographic and clinical characteristics and 2 summary measures of cardiac medications and co-morbidities and were evaluated for 9-month expenditures and clinical outcomes. Adjusted total health care and CAD expenditures were 27% (p cost-efficient alternative to SPECT for the initial coronary evaluation of patients without known CAD.

  20. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  1. Beat-ID: Towards a computationally low-cost single heartbeat biometric identity check system based on electrocardiogram wave morphology

    Science.gov (United States)

    Paiva, Joana S.; Dias, Duarte

    2017-01-01

    In recent years, safer and more reliable biometric methods have been developed. Apart from the need for enhanced security, the media and entertainment sectors have also been applying biometrics in the emerging market of user-adaptable objects/systems to make these systems more user-friendly. However, the complexity of some state-of-the-art biometric systems (e.g., iris recognition) or their high false rejection rate (e.g., fingerprint recognition) is neither compatible with the simple hardware architecture required by reduced-size devices nor the new trend of implementing smart objects within the dynamic market of the Internet of Things (IoT). It was recently shown that an individual can be recognized by extracting features from their electrocardiogram (ECG). However, most current ECG-based biometric algorithms are computationally demanding and/or rely on relatively large (several seconds) ECG samples, which are incompatible with the aforementioned application fields. Here, we present a computationally low-cost method (patent pending), including simple mathematical operations, for identifying a person using only three ECG morphology-based characteristics from a single heartbeat. The algorithm was trained/tested using ECG signals of different duration from the Physionet database on more than 60 different training/test datasets. The proposed method achieved maximal averaged accuracy of 97.450% in distinguishing each subject from a ten-subject set and false acceptance and rejection rates (FAR and FRR) of 5.710±1.900% and 3.440±1.980%, respectively, placing Beat-ID in a very competitive position in terms of the FRR/FAR among state-of-the-art methods. Furthermore, the proposed method can identify a person using an average of 1.020 heartbeats. It therefore has FRR/FAR behavior similar to obtaining a fingerprint, yet it is simpler and requires less expensive hardware. This method targets low-computational/energy-cost scenarios, such as tiny wearable devices (e.g., a
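
    Beat-ID is evaluated by its false acceptance and false rejection rates at a given operating point. The sketch below shows only how FAR and FRR are computed from genuine and impostor similarity scores at a decision threshold; the score distributions are synthetic placeholders, not ECG features from the method.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR on impostor attempts and FRR on genuine attempts for a similarity-score threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    frr = np.mean(genuine < threshold)       # genuine users wrongly rejected
    far = np.mean(impostor >= threshold)     # impostors wrongly accepted
    return far, frr

# Synthetic similarity scores, for illustration only
rng = np.random.default_rng(1)
genuine = rng.normal(0.85, 0.05, 1000)
impostor = rng.normal(0.60, 0.08, 1000)
for thr in (0.70, 0.75, 0.80):
    far, frr = far_frr(genuine, impostor, thr)
    print(f"threshold={thr:.2f}  FAR={far:.2%}  FRR={frr:.2%}")
```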

  2. A Low-Cost Computer-Controlled Arduino-Based Educational Laboratory System for Teaching the Fundamentals of Photovoltaic Cells

    Science.gov (United States)

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-01-01

    We present a low-cost, fully computer-controlled, Arduino-based, educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools and methodology in order to learn fundamental…

  3. Computing with high-resolution upwind schemes for hyperbolic equations

    International Nuclear Information System (INIS)

    Chakravarthy, S.R.; Osher, S.; California Univ., Los Angeles)

    1985-01-01

    Computational aspects of modern high-resolution upwind finite-difference schemes for hyperbolic systems of conservation laws are examined. An operational unification is demonstrated for constructing a wide class of flux-difference-split and flux-split schemes based on the design principles underlying total variation diminishing (TVD) schemes. Consideration is also given to TVD scheme design by preprocessing, the extension of preprocessing and postprocessing approaches to general control volumes, the removal of expansion shocks and glitches, relaxation methods for implicit TVD schemes, and a new family of high-accuracy TVD schemes. 21 references
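    As a concrete illustration of the TVD design principle discussed above, the following sketch applies a classic flux-limited (minmod) scheme to scalar advection; it is a textbook example, not one of the specific flux-difference-split or flux-split schemes developed in the paper.

```python
# Textbook flux-limited TVD scheme (Sweby form, minmod limiter) for scalar
# advection u_t + a u_x = 0 with a > 0 on a periodic grid.
import numpy as np

def limiter(r):
    return np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter

def step(u, nu):
    du = np.roll(u, -1) - u                       # forward differences
    du_m = u - np.roll(u, 1)                      # backward differences
    r = np.where(du != 0, du_m / np.where(du == 0, 1.0, du), 0.0)
    # Upwind flux plus limited antidiffusive correction (TVD for 0 <= nu <= 1).
    flux = u + 0.5 * (1.0 - nu) * limiter(r) * du
    return u - nu * (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)     # square pulse
nu = 0.5                                          # CFL number a*dt/dx
for _ in range(200):
    u = step(u, nu)
print(round(u.min(), 3), round(u.max(), 3))       # stays within [0, 1]: no new extrema
```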

  4. High spatial resolution CT image reconstruction using parallel computing

    International Nuclear Information System (INIS)

    Yin Yin; Liu Li; Sun Gongxing

    2003-01-01

    Using a PC cluster system with 16 dual-CPU nodes, we accelerate the FBP and OR-OSEM reconstruction of high spatial resolution images (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms in parallel form and dispatch the tasks to each CPU. With parallel computing, the speedup factor is roughly equal to the number of CPUs, reaching about 25 when 25 CPUs are used. This technique is very suitable for real-time high spatial resolution CT image reconstruction. (authors)
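    The abstract describes splitting the reconstruction by projection angle across CPUs. A minimal sketch of that idea follows, using Python's multiprocessing and a toy backprojection kernel in place of the actual FBP/OR-OSEM implementations; the image size and sinogram data are synthetic.

```python
# Sketch of projection-parallel backprojection: each projection angle is
# processed independently and the partial images are summed at the end.
import numpy as np
from multiprocessing import Pool

N = 256  # image size (2048 in the paper; smaller here to keep the demo quick)

def backproject_one(args):
    angle, profile = args
    # Smear one (pre-filtered) projection profile back across the image grid.
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    t = x * np.cos(angle) + y * np.sin(angle)          # detector coordinate
    idx = np.clip(((t + 1) / 2 * (len(profile) - 1)).astype(int), 0, len(profile) - 1)
    return profile[idx]

if __name__ == "__main__":
    angles = np.linspace(0, np.pi, 180, endpoint=False)
    rng = np.random.default_rng(1)
    projections = [(a, rng.random(N)) for a in angles]  # synthetic sinogram rows
    with Pool() as pool:                                # one task per projection
        partial_images = pool.map(backproject_one, projections)
    image = np.sum(partial_images, axis=0)
    print(image.shape)
```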

  5. High fitness costs of climate change-induced camouflage mismatch.

    Science.gov (United States)

    Zimova, Marketa; Mills, L Scott; Nowak, J Joshua

    2016-03-01

    Anthropogenic climate change has created myriad stressors that threaten to cause local extinctions if wild populations fail to adapt to novel conditions. We studied individual and population-level fitness costs of a climate change-induced stressor: camouflage mismatch in seasonally colour molting species confronting decreasing snow cover duration. Based on field measurements of radiocollared snowshoe hares, we found strong selection on coat colour molt phenology, such that animals mismatched with the colour of their background experienced weekly survival decreases up to 7%. In the absence of adaptive response, we show that these mortality costs would result in strong population-level declines by the end of the century. However, natural selection acting on wide individual variation in molt phenology might enable evolutionary adaptation to camouflage mismatch. We conclude that evolutionary rescue will be critical for hares and other colour molting species to keep up with climate change. © 2016 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  6. HTGR high temperature process heat design and cost status report

    International Nuclear Information System (INIS)

    1981-12-01

    This report describes the status of the studies conducted on the 850°C ROT indirect cycle and the 950°C ROT direct cycle through the end of Fiscal Year 1981. Volume I provides summaries of the design and optimization studies and the resulting capital and product costs for the HTGR/thermochemical pipeline concept. Additionally, preliminary evaluations are presented for coupling of candidate process applications to the HTGR system

  7. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    Science.gov (United States)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormous with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude, etc.) must be handled to display seismic information efficiently. High-speed processing of data and image information is therefore necessary to handle such large amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study; this movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, using the GPU has the advantage that processed data can be visualized directly, because the GPU was originally designed for graphics processing: the processed data is always stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfers, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system of hypocenter data.

  8. Capital and operating cost estimates for high temperature superconducting magnetic energy storage

    International Nuclear Information System (INIS)

    Schoenung, S.M.; Meier, W.R.; Fagaly, R.L.; Heiberger, M.; Stephens, R.B.; Leuer, J.A.; Guzman, R.A.

    1992-01-01

    Capital and operating costs have been estimated for mid-scale (2 to 200 MWh) superconducting magnetic energy storage (SMES) designed to use high temperature superconductors (HTS). Capital costs are dominated by the cost of superconducting materials. Operating costs, primarily for refrigeration, are significantly reduced for HTS-SMES in comparison with low temperature, conventional systems. This cost component is small compared with other O&M and capital components when levelized annual costs are projected. In this paper, the developments required for HTS-SMES feasibility are discussed
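    The comparison above relies on levelized annual costs. A generic sketch of that levelization (capital recovery factor plus annual O&M) is shown below; the plant size, discount rate, and dollar figures are placeholders, not the paper's estimates.

```python
# Generic levelized annual cost calculation (capital recovery factor + O&M).
# All numbers are placeholders for illustration, not the paper's estimates.
def capital_recovery_factor(rate, years):
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_annual_cost(capital, annual_om, rate=0.08, years=30):
    return capital * capital_recovery_factor(rate, years) + annual_om

# Hypothetical 20 MWh SMES plant: $60M capital, $0.5M/yr O&M (incl. refrigeration).
print(f"${levelized_annual_cost(60e6, 0.5e6):,.0f} per year")
```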

  9. Cost-Effectiveness Analysis in Practice: Interventions to Improve High School Completion

    Science.gov (United States)

    Hollands, Fiona; Bowden, A. Brooks; Belfield, Clive; Levin, Henry M.; Cheng, Henan; Shand, Robert; Pan, Yilin; Hanisch-Cerda, Barbara

    2014-01-01

    In this article, we perform cost-effectiveness analysis on interventions that improve the rate of high school completion. Using the What Works Clearinghouse to select effective interventions, we calculate cost-effectiveness ratios for five youth interventions. We document wide variation in cost-effectiveness ratios between programs and between…

  10. CONSTRUCTION OF A DIFFERENTIAL ISOTHERMAL CALORIMETER OF HIGH SENSITIVITY AND LOW COST.

    OpenAIRE

    Trinca, RB; Perles, CE; Volpe, PLO

    2009-01-01

    The high cost of high-sensitivity commercial calorimeters may represent an obstacle for many calorimetric research groups. This work describes the construction and calibration of a batch differential heat conduction calorimeter with sample cell volumes of about 400 μL. The calorimeter was built using two small high-sensitivity square Peltier thermoelectric sensors, and the total cost was estimated to be about...

  11. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    Microcontroller (AT89C51)-based electronics have been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform, and its frequency is multiplied by a factor of ten by a frequency multiplier circuit based on a phase-locked loop. An octal buffer stores the resulting frequency, which is fed to the AT89C51 microcontroller interfaced with a liquid crystal display showing the frequency as well as the corresponding pressure in user-friendly units. The electronics are interfaced with a computer over RS232 for automatic data acquisition, computation and storage, with the acquisition software written in Visual Basic 6.0, making it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. The details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
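    The host-side acquisition in the paper was written in Visual Basic 6.0 over RS232. A rough modern equivalent in Python with pyserial is sketched below; the port name, baud rate, and line format are assumptions, not the instrument's documented protocol.

```python
# Rough modern equivalent of the RS232 acquisition loop described above, using
# pyserial. Port name, baud rate, and the assumed line format
# "<frequency_Hz>,<pressure_MPa>" are illustrative, not the paper's protocol.
import serial  # pip install pyserial

def acquire(port="COM1", baud=9600, n_samples=10, max_reads=100):
    readings = []
    with serial.Serial(port, baud, timeout=1) as ser:
        for _ in range(max_reads):
            if len(readings) >= n_samples:
                break
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                           # timeout or empty line
            freq_hz, pressure_mpa = (float(v) for v in line.split(","))
            readings.append((freq_hz, pressure_mpa))
    return readings

if __name__ == "__main__":
    for f, p in acquire():
        print(f"{f:.2f} Hz  ->  {p:.3f} MPa")
```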

  12. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  13. FPGAs in High Perfomance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  14. Ultra High Brightness/Low Cost Fiber Coupled Packaging, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — High peak power, high efficiency, high reliability lightweight, low cost QCW laser diode pump modules with up to 1000W of QCW output become possible with nLight's...

  15. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 already NIC/JSC has installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  16. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan

    2009-01-01

    This paper reports the latest results of the comprehensive program of experimental and computational analysis of strength and reliability of wooden parts of low cost wind turbines. The possibilities of prediction of strength and reliability of different types of wood are studied in a series of experiments and computational investigations. Low cost testing machines have been designed, and employed for the systematic analysis of different sorts of Nepali wood, to be used for the wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood are developed, which should provide the basis for microstructure-based correlating of observable and service properties of wood. Some correlations between microstructure, strength and service properties of wood have been established.

  17. Cost-effectiveness modeling of colorectal cancer: Computed tomography colonography vs colonoscopy or fecal occult blood tests

    International Nuclear Information System (INIS)

    Lucidarme, Olivier; Cadi, Mehdi; Berger, Genevieve; Taieb, Julien; Poynard, Thierry; Grenier, Philippe; Beresniak, Ariel

    2012-01-01

    Objectives: To assess the cost-effectiveness of three colorectal cancer (CRC) screening strategies in France: fecal occult blood tests (FOBT), computed tomography colonography (CTC) and optical colonoscopy (OC). Methods: Ten-year simulation modeling was used to assess a virtual asymptomatic, average-risk population 50–74 years old. Negative OC was repeated 10 years later, and OC positive for advanced or non-advanced adenoma was repeated 3 or 5 years later, respectively. FOBT was repeated biennially. Negative CTC was repeated 5 years later. Positive CTC and FOBT led to triennial OC. The total cost and CRC rate after 10 years were computed for each screening strategy and for 0–100% adherence rates in 10% increments. Transition probabilities were programmed using distribution ranges to account for parameter uncertainty. Direct medical costs were estimated using French national health insurance prices. Probabilistic sensitivity analyses used 5000 Monte Carlo simulations generating model outcomes and standard deviations. Results: For a given adherence rate, CTC screening was always the most effective but not the most cost-effective strategy. FOBT was the least effective but most cost-effective strategy. OC was of intermediate efficacy and the least cost-effective strategy. Without screening, treatment of 123 CRC per 10,000 individuals would cost €3,444,000. For 60% adherence, the costs of preventing and treating, respectively, 49 and 74 FOBT-detected, 73 and 50 CTC-detected, and 63 and 60 OC-detected CRC would be €2,810,000, €6,450,000 and €9,340,000. Conclusion: Simulation modeling helped to identify the most effective (CTC) and the most cost-effective (FOBT) screening strategy in the setting of mass CRC screening in France.
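    A minimal sketch of the kind of probabilistic cost model described above follows: Monte Carlo draws over uncertain parameters yield a mean cost and CRC count per 10,000 people. All probabilities and unit costs are placeholders, not the study's inputs.

```python
# Minimal sketch of a probabilistic screening-cost model: Monte Carlo draws over
# uncertain parameters, reporting mean cost and CRC cases per 10,000 people.
# All probabilities and unit costs below are placeholders, not the study's values.
import numpy as np

rng = np.random.default_rng(42)
POPULATION, ADHERENCE, N_RUNS = 10_000, 0.6, 1_000

def one_run():
    p_crc = rng.uniform(0.010, 0.014)          # 10-year CRC risk, unscreened
    screening_effect = rng.uniform(0.3, 0.6)   # relative risk reduction if screened
    cost_screen = rng.uniform(80, 120)         # per screening episode (EUR)
    cost_treat = rng.uniform(25_000, 32_000)   # per CRC case (EUR)
    screened = rng.random(POPULATION) < ADHERENCE
    risk = np.where(screened, p_crc * (1 - screening_effect), p_crc)
    crc = rng.random(POPULATION) < risk
    total = screened.sum() * cost_screen + crc.sum() * cost_treat
    return total, crc.sum()

results = np.array([one_run() for _ in range(N_RUNS)])
print(f"mean cost: EUR {results[:, 0].mean():,.0f} +/- {results[:, 0].std():,.0f}")
print(f"mean CRC cases per 10,000: {results[:, 1].mean():.1f}")
```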

  18. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  19. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  20. What Contributes Most to High Health Care Costs? Health Care Spending in High Resource Patients.

    Science.gov (United States)

    Pritchard, Daryl; Petrilla, Allison; Hallinan, Shawn; Taylor, Donald H; Schabert, Vernon F; Dubois, Robert W

    2016-02-01

    U.S. health care spending nearly doubled in the decade from 2000-2010. Although the pace of increase has moderated recently, the rate of growth of health care costs is expected to be higher than the growth in the economy for the near future. Previous studies have estimated that 5% of patients account for half of all health care costs, while the top 1% of spenders account for over 27% of costs. The distribution of health care expenditures by type of service and the prevalence of particular health conditions for these patients is not clear, and is likely to differ from the overall population. To examine health care spending patterns and what contributes to costs for the top 5% of managed health care users based on total expenditures. This retrospective observational study employed a large administrative claims database analysis of health care claims of managed care enrollees across the full age and care spectrum. Direct health care expenditures were compared during calendar year 2011 by place of service (outpatient, inpatient, and pharmacy), payer type (commercially insured, Medicare Advantage, and Medicaid managed care), and therapy area between the full population and high resource patients (HRP). The mean total expenditure per HRP during calendar year 2011 was $43,104 versus $3,955 per patient for the full population. Treatment of back disorders and osteoarthritis contributed the largest share of expenditures in both HRP and the full study population, while chronic renal failure, heart disease, and some oncology treatments accounted for disproportionately higher expenditures in HRP. The share of overall expenditures attributed to inpatient services was significantly higher for HRP (40.0%) compared with the full population (24.6%), while the share of expenditures attributed to pharmacy (HRP = 18.1%, full = 21.4%) and outpatient services (HRP = 41.9%, full = 54.1%) was reduced. This pattern was observed across payer type. While the use of physician

  1. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  2. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  3. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently, IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, greater radiation reliability becomes necessary.

  4. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  5. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  6. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  7. High-throughput computational search for strengthening precipitates in alloys

    International Nuclear Information System (INIS)

    Kirklin, S.; Saal, James E.; Hegde, Vinay I.; Wolverton, C.

    2016-01-01

    The search for high-strength alloys and precipitation hardened systems has largely been accomplished through Edisonian trial and error experimentation. Here, we present a novel strategy using high-throughput computational approaches to search for promising precipitate/alloy systems. We perform density functional theory (DFT) calculations of an extremely large space of ∼200,000 potential compounds in search of effective strengthening precipitates for a variety of different alloy matrices, e.g., Fe, Al, Mg, Ni, Co, and Ti. Our search strategy involves screening phases that are likely to produce coherent precipitates (based on small lattice mismatch) and are composed of relatively common alloying elements. When combined with the Open Quantum Materials Database (OQMD), we can computationally screen for precipitates that either have a stable two-phase equilibrium with the host matrix, or are likely to precipitate as metastable phases. Our search produces (for the structure types considered) nearly all currently known high-strength precipitates in a variety of fcc, bcc, and hcp matrices, thus giving us confidence in the strategy. In addition, we predict a number of new, currently-unknown precipitate systems that should be explored experimentally as promising high-strength alloy chemistries.
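    A minimal sketch of the screening idea follows: filter candidate phases by lattice mismatch with the host matrix (coherency) and by energy above the convex hull (stability), as one would against a DFT database such as the OQMD. The candidate entries and thresholds are illustrative assumptions, not the paper's screening set.

```python
# Sketch of the screening idea: keep candidate precipitate phases whose lattice
# parameter is close to the host matrix (coherency) and whose energy above the
# convex hull is small (stability). The candidate entries below are illustrative;
# in practice they would be pulled from a DFT database such as the OQMD.
from dataclasses import dataclass

@dataclass
class Phase:
    formula: str
    lattice_a: float          # Angstrom, cubic setting assumed for simplicity
    e_above_hull: float       # eV/atom

HOST_A = 3.52                 # fcc Ni matrix, Angstrom
candidates = [
    Phase("Ni3Al", 3.57, 0.000),
    Phase("Ni3Ti", 3.59, 0.020),
    Phase("NiBe",  2.62, 0.000),
    Phase("Ni3Nb", 3.62, 0.045),
]

def mismatch(phase, host_a=HOST_A):
    return abs(phase.lattice_a - host_a) / host_a

hits = [p for p in candidates
        if mismatch(p) < 0.05 and p.e_above_hull < 0.025]   # coherent + (meta)stable
for p in hits:
    print(f"{p.formula}: mismatch {mismatch(p):.1%}, E_hull {p.e_above_hull:.3f} eV/atom")
```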

  8. The variation of acute treatment costs of trauma in high-income countries

    Directory of Open Access Journals (Sweden)

    Willenberg Lynsey

    2012-08-01

    Background: In order to assist health service planning, understanding factors that influence higher trauma treatment costs is essential. The majority of trauma costing research reports the cost of trauma from the perspective of the receiving hospital. There has been no comprehensive synthesis and little assessment of the drivers of cost variation, such as country, trauma subgroups and methods. The aim of this review is to provide a synthesis of research reporting the trauma treatment costs and factors associated with higher treatment costs in high income countries. Methods: A systematic search for articles relating to the cost of acute trauma care was performed and included studies reporting injury severity scores (ISS), per patient cost/charge estimates, and costing methods. Cost and charge values were indexed to 2011 cost equivalents and converted to US dollars using purchasing power parities. Results: A total of twenty-seven studies were reviewed. Eighty-one percent of these studies were conducted in high income countries including the USA, Australia, Europe and the UK. Studies either reported a cost (74.1%) or a charge estimate (25.9%) for the acute treatment of trauma. Across studies, the median per patient cost of acute trauma treatment was $22,448 (IQR: $11,819-$33,701). However, there was variability in the costing methods used, with 18% of studies providing comprehensive cost methods. Sixty-three percent of studies reported cost or charge items incorporated in their cost analysis and 52% reported items excluded in their analysis. In all publications reviewed, predictors of cost included Injury Severity Score (ISS), surgical intervention, hospital and intensive care length of stay, polytrauma and age. Conclusion: The acute treatment cost of trauma is higher than that of other disease groups. Research has been largely conducted in high income countries and variability exists in reporting costing methods as well as the actual costs. Patient populations studied

  9. The variation of acute treatment costs of trauma in high-income countries.

    Science.gov (United States)

    Willenberg, Lynsey; Curtis, Kate; Taylor, Colman; Jan, Stephen; Glass, Parisa; Myburgh, John

    2012-08-21

    In order to assist health service planning, understanding factors that influence higher trauma treatment costs is essential. The majority of trauma costing research reports the cost of trauma from the perspective of the receiving hospital. There has been no comprehensive synthesis and little assessment of the drivers of cost variation, such as country, trauma subgroups and methods. The aim of this review is to provide a synthesis of research reporting the trauma treatment costs and factors associated with higher treatment costs in high income countries. A systematic search for articles relating to the cost of acute trauma care was performed and included studies reporting injury severity scores (ISS), per patient cost/charge estimates, and costing methods. Cost and charge values were indexed to 2011 cost equivalents and converted to US dollars using purchasing power parities. A total of twenty-seven studies were reviewed. Eighty-one percent of these studies were conducted in high income countries including the USA, Australia, Europe and the UK. Studies either reported a cost (74.1%) or a charge estimate (25.9%) for the acute treatment of trauma. Across studies, the median per patient cost of acute trauma treatment was $22,448 (IQR: $11,819-$33,701). However, there was variability in the costing methods used, with 18% of studies providing comprehensive cost methods. Sixty-three percent of studies reported cost or charge items incorporated in their cost analysis and 52% reported items excluded in their analysis. In all publications reviewed, predictors of cost included Injury Severity Score (ISS), surgical intervention, hospital and intensive care length of stay, polytrauma and age. The acute treatment cost of trauma is higher than that of other disease groups. Research has been largely conducted in high income countries and variability exists in reporting costing methods as well as the actual costs. Patient populations studied and the cost methods employed are the primary drivers for the

  10. Offering lung cancer screening to high-risk medicare beneficiaries saves lives and is cost-effective: an actuarial analysis.

    Science.gov (United States)

    Pyenson, Bruce S; Henschke, Claudia I; Yankelevitz, David F; Yip, Rowena; Dec, Ellynne

    2014-08-01

    By a wide margin, lung cancer is the most significant cause of cancer death in the United States and worldwide. The incidence of lung cancer increases with age, and Medicare beneficiaries are often at increased risk. Because of its demonstrated effectiveness in reducing mortality, lung cancer screening with low-dose computed tomography (LDCT) imaging will be covered without cost-sharing starting January 1, 2015, by nongrandfathered commercial plans. Medicare is considering coverage for lung cancer screening. To estimate the cost and cost-effectiveness (ie, cost per life-year saved) of LDCT lung cancer screening of the Medicare population at high risk for lung cancer. Medicare costs, enrollment, and demographics were used for this study; they were derived from the 2012 Centers for Medicare & Medicaid Services (CMS) beneficiary files and were forecast to 2014 based on CMS and US Census Bureau projections. Standard life and health actuarial techniques were used to calculate the cost and cost-effectiveness of lung cancer screening. The cost, incidence rates, mortality rates, and other parameters chosen by the authors were taken from actual Medicare data, and the modeled screenings are consistent with Medicare processes and procedures. Approximately 4.9 million high-risk Medicare beneficiaries would meet criteria for lung cancer screening in 2014. Without screening, Medicare patients newly diagnosed with lung cancer have an average life expectancy of approximately 3 years. Based on our analysis, the average annual cost of LDCT lung cancer screening in Medicare is estimated to be $241 per person screened. LDCT screening for lung cancer in Medicare beneficiaries aged 55 to 80 years with a history of ≥30 pack-years of smoking and who had smoked within 15 years is low cost, at approximately $1 per member per month. This assumes that 50% of these patients were screened. Such screening is also highly cost-effective, at <$19,000 per life-year saved. If all eligible Medicare
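    The structure of the actuarial quantities quoted above can be sketched with simple arithmetic; the total membership count and the life-years saved below are assumed placeholders chosen only to echo the reported per-member-per-month and cost-per-life-year figures, not the authors' model parameters.

```python
# Back-of-the-envelope structure of the actuarial quantities quoted above.
# Inputs are rough placeholders chosen to echo the reported figures, not the
# authors' actual model parameters.
eligible = 4_900_000              # high-risk Medicare beneficiaries
screened_fraction = 0.50          # the 50% screening-uptake assumption
annual_cost_per_screened = 241    # USD per person screened, as reported
medicare_members = 50_000_000     # assumed total covered members (placeholder)

total_cost = eligible * screened_fraction * annual_cost_per_screened
pmpm = total_cost / medicare_members / 12
print(f"program cost: ${total_cost/1e6:,.0f}M/yr, about ${pmpm:.2f} per member per month")

life_years_saved = 32_000         # hypothetical figure, for illustration only
print(f"cost per life-year saved: ${total_cost / life_years_saved:,.0f}")
```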

  11. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms.

    Science.gov (United States)

    Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D

    2017-01-01

    Deep brain stimulation (DBS) has compelling results in the desynchronization of basal ganglia neuronal activities and thus is used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase the neuronal activity and reduce energy cost, which will prolong battery life, hence avoiding device replacement surgeries. This study considers the use of a charge balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The proposed Gaussian waveform with delay outperformed the rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and a shorter delay compared with numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms was 22% lower in comparison with charge balanced Gaussian waveforms without any delay between the cathodic and anodic parts, and was also 60% lower than a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronizing the basal ganglia neurons and reducing energy consumption can potentially enhance the performance of DBS
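    A minimal sketch of generating a charge-balanced Gaussian pulse with an interphase delay (the GDG idea described above) is shown below; amplitudes, widths, and the delay are placeholders, not the values used in the paper's basal ganglia model.

```python
# Sketch of a charge-balanced Gaussian stimulation pulse with an interphase
# delay between the cathodic and anodic parts ("GDG"-style waveform).
# Amplitudes, widths, and the delay are placeholders, not the paper's values.
import numpy as np

def gdg_pulse(amp_ua=100.0, sigma_us=60.0, delay_us=200.0, dt_us=1.0):
    t = np.arange(0.0, 6 * sigma_us, dt_us)
    g = np.exp(-0.5 * ((t - 3 * sigma_us) / sigma_us) ** 2)
    cathodic = -amp_ua * g                         # negative (cathodic) Gaussian phase
    gap = np.zeros(int(delay_us / dt_us))          # interphase delay
    anodic = amp_ua * g                            # equal and opposite charge
    return np.concatenate([cathodic, gap, anodic])

pulse = gdg_pulse()
charge_balance = pulse.sum()                       # ~0 by construction
print(f"samples: {pulse.size}, net charge ~ {charge_balance:.2e} (arbitrary units)")
```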

  12. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms

    Directory of Open Access Journals (Sweden)

    Mohammad Daneshzand

    2017-08-01

    Deep brain stimulation (DBS) has compelling results in the desynchronization of basal ganglia neuronal activities and thus is used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase the neuronal activity and reduce energy cost, which will prolong battery life, hence avoiding device replacement surgeries. This study considers the use of a charge balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The proposed Gaussian waveform with delay outperformed the rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and a shorter delay compared with numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms was 22% lower in comparison with charge balanced Gaussian waveforms without any delay between the cathodic and anodic parts, and was also 60% lower than a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronization of the basal ganglia neurons and reduction of energy consumption can potentially enhance the

  13. Computational study of a High Pressure Turbine Nozzle/Blade Interaction

    Science.gov (United States)

    Kopriva, James; Laskowski, Gregory; Sheikhi, Reza

    2015-11-01

    A downstream high pressure turbine blade has been designed for this study to be coupled with the upstream uncooled nozzle of Arts and Rouvroit [1992]. The computational domain is first held to a pitch-line section that includes no centrifugal forces (linear sliding-mesh). The stage geometry is intended to study the fundamental nozzle/blade interaction in a computationally cost-efficient manner. A blade/nozzle count of 2:1 is chosen to maintain computational periodic boundary conditions for the coupled problem. Next, the geometry is extended to a fully 3D domain with endwalls to understand the impact of secondary flow structures. A set of systematic computational studies is presented to understand the impact of turbulence on the nozzle and downstream blade boundary-layer development, the resulting heat transfer, and downstream wake mixing in the absence of cooling. Doing so will provide a much better understanding of stage mixing losses and wall heat transfer which, in turn, can allow for improved engine performance. Computational studies are performed using the WALE (Wall-Adapting Local Eddy-viscosity), IDDES (Improved Delayed Detached Eddy Simulation), and SST (Shear Stress Transport) models in Fluent.

  14. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  15. Potentially Low Cost Solution to Extend Use of Early Generation Computed Tomography

    Directory of Open Access Journals (Sweden)

    Tonna, Joseph E

    2010-12-01

    In preparing a case report on Brown-Séquard syndrome for publication, we made the incidental finding that the inexpensive, commercially available three-dimensional (3D) rendering software we were using could produce high quality 3D spinal cord reconstructions from any series of two-dimensional (2D) computed tomography (CT) images. This finding raises the possibility that spinal cord imaging capabilities can be expanded where bundled 2D multi-planar reformats and 3D reconstruction software for CT are not available and in situations where magnetic resonance imaging (MRI) is either not available or not appropriate (e.g., metallic implants). Given the worldwide burden of trauma and considering the limited availability of MRI and advanced-generation CT scanners, we propose an alternative, potentially useful approach to imaging the spinal cord that might be useful in areas where technical capabilities and support are limited. [West J Emerg Med. 2010;11(5):463-466.]

  16. Effects on costs of frontline diagnostic evaluation in patients suspected of angina: coronary computed tomography angiography vs. conventional ischaemia testing

    DEFF Research Database (Denmark)

    Nielsen, Lene H; Olsen, Jens; Markenvard, John

    2013-01-01

    AIMS: The aim of this study was to investigate in patients with stable angina the effects on costs of frontline diagnostics by exercise-stress testing (ex-test) vs. coronary computed tomography angiography (CTA). METHODS AND RESULTS: In two coronary units at Lillebaelt Hospital, Denmark, 498 patients were identified in whom either ex-test (n = 247) or CTA (n = 251) was applied as the frontline diagnostic strategy in symptomatic patients with a low-intermediate pre-test probability of coronary artery disease (CAD). During 12 months of follow-up, death, myocardial infarction and costs ... The mean (SD) total costs per patient at the end of the follow-up were 14% lower in the CTA group than in the ex-test group, €1510 (3474) vs. €1777 (3746) (P = 0.03). CONCLUSION: Diagnostic assessment of symptomatic patients with a low-intermediate probability of CAD by CTA incurred lower costs ...

  17. New techniques provide low-cost X-ray inspection of highly attenuating materials

    International Nuclear Information System (INIS)

    Stupin, D.M.; Mueller, K.H.; Viskoe, D.A.; Howard, B.; Poland, R.W.; Schneberk, D.; Dolan, K.; Thompson, K.; Stoker, G.

    1995-01-01

    As a result of an arms reduction treaty between the United States and the Russian Federation, both countries will each be storing over 40,000 containers of plutonium. To help detect any deterioration of the containers and prevent leakage, the authors are designing a digital radiography and computed tomography system capable of handling this volume reliably, efficiently, and at a lower cost. The materials to be stored have very high x-ray attenuations, and, in the past, were inspected using 1- to 24-MV x-ray sources. This inspection system, however, uses a new scintillating (Lockheed) glass and an integrating CCD camera. Preliminary experiments show that this will permit the use of a 450-kV x-ray source. This low-energy system will cost much less than others designed to use a higher-energy x-ray source because it will require a less expensive source, less shielding, and less floor space. Furthermore, they can achieve a tenfold improvement in spatial resolution by using their knowledge of the point-spread function of the x-ray imaging system and a least-squares fitting technique
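    The resolution-recovery step mentioned above combines a known point-spread function with a least-squares fit. A one-dimensional Tikhonov-regularized sketch of that idea follows; it is illustrative only, not the Laboratory's actual processing chain.

```python
# 1-D illustration of resolution recovery by least-squares fitting with a known
# point-spread function (PSF): solve min ||A x - y||^2 + lam ||x||^2, where A is
# the convolution matrix built from the PSF. Not the Lab's actual processing chain.
import numpy as np

n = 200
x_true = np.zeros(n); x_true[80:85] = 1.0; x_true[120:140] = 0.5   # sharp features
psf = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2); psf /= psf.sum()

# Convolution matrix A, built column by column from unit impulses.
A = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    A[:, j] = np.convolve(e, psf, mode="same")

rng = np.random.default_rng(0)
y = A @ x_true + 0.01 * rng.standard_normal(n)     # blurred, noisy measurement

lam = 1e-2                                         # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(f"restored peak: {x_hat.max():.2f} vs blurred peak: {y.max():.2f}")
```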

  18. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  19. Diagnosis of cholesteatoma by high resolution computed tomography

    International Nuclear Information System (INIS)

    Kakitsubata, Yousuke; Kakitsubata, Sachiko; Ogata, Noboru; Asada, Keiko; Watanabe, Katsushi; Tohno, Tetsuya; Makino, Kohji

    1988-01-01

    Three normal volunteers and 57 patients with cholesteatoma were examined by high resolution computed tomography. Serial sections were made through the temporal bone at a nasally inclined position of 30 degrees to the orbitomeatal line (semiaxial plane; SAP). The findings of temporal bone structures in normal subjects were evaluated in the SAP and the axial plane (OM). Although both planes showed good visualization, the SAP showed both the eustachian tube and the tympanic cavity in one slice. In cholesteatoma, soft tissue masses in the tympanic cavity, mastoid air cells and eustachian tube were demonstrated clearly by SAP. (author)

  20. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    Science.gov (United States)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premier operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data is not lost. In a catastrophe, these data would currently recover at 20 GB/hour, taking several days for a restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. This also obviates any changes to the software that generates the most recent data set, as it still has the same interface to the file system as it did before the updates.
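    A minimal sketch of the tiered-storage idea, keeping the newest products on local disk and pushing older ones to cloud object storage, is shown below. The bucket name, directory, and age threshold are hypothetical, and boto3 stands in for whichever cloud client the mission actually uses.

```python
# Sketch of tiered storage: keep the most recent products on local disk for
# guaranteed availability, push everything older to cloud object storage.
# Bucket name, paths, and the age threshold are placeholders.
import os, time
import boto3  # pip install boto3

LOCAL_DIR = "/data/products"          # hypothetical local cache directory
BUCKET = "example-mission-archive"    # hypothetical bucket name
KEEP_LOCAL_SECONDS = 7 * 24 * 3600    # keep one week of data locally

def tier_out(local_dir=LOCAL_DIR, bucket=BUCKET):
    s3 = boto3.client("s3")
    now = time.time()
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        if now - os.path.getmtime(path) > KEEP_LOCAL_SECONDS:
            s3.upload_file(path, bucket, name)   # durable, highly available copy
            os.remove(path)                      # free local space

if __name__ == "__main__":
    tier_out()
```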

  1. Preliminary estimates of cost savings for defense high level waste vitrification options

    International Nuclear Information System (INIS)

    Merrill, R.A.; Chapman, C.C.

    1993-09-01

    The potential for realizing cost savings in the disposal of defense high-level waste through process and design modifications has been considered. Proposed modifications range from simple changes in the canister design to development of an advanced melter capable of processing glass with a higher waste loading. Preliminary calculations estimate the total disposal cost (not including capital or operating costs) for defense high-level waste to be about $7.9 billion for the reference conditions described in this paper, while projected savings resulting from the proposed process and design changes could reduce the disposal cost of defense high-level waste by up to $5.2 billion

  2. Cost of Mastitis in Scottish Dairy Herds with Low and High Subclinical Mastitis Problems

    OpenAIRE

    YALÇIN, Cengiz

    2000-01-01

    The aim of this study was to estimate the cost of mastitis and the contribution of each cost component of mastitis to the total mastitis-induced cost in herds with low and high levels of subclinical mastitis under Scottish field conditions. It was estimated that mastitis cost £140 per cow/year for the average Scottish dairy farmer in 1996. However, this figure was as low as £69 per cow/year in herds with lower levels of subclinical mastitis, and as high as £228 per cow/year in herds with high s...

  3. Low Cost High Performance Nanostructured Spectrally Selective Coating

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Sungho [Univ. of California, San Diego, CA (United States)

    2017-04-05

    Sunlight-absorbing coating is a key enabling technology for high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet all three of the following stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate black oxide nanoparticles and coat them onto the solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto the absorber metal surface. The material composition, size distribution and morphology of the nanoparticles are guided by numerical modeling. Optical and thermal properties have been both modeled and measured. High temperature durability has been achieved by using nanocomposites and high temperature annealing. Mechanical durability under thermal cycling has also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentrating solar power (CSP) plants.

  4. Incremental cost of department-wide implementation of a picture archiving and communication system and computed radiography.

    Science.gov (United States)

    Pratt, H M; Langlotz, C P; Feingold, E R; Schwartz, J S; Kundel, H L

    1998-01-01

    To determine the incremental cash flows associated with department-wide implementation of a picture archiving and communication system (PACS) and computed radiography (CR) at a large academic medical center. The authors determined all capital and operational costs associated with PACS implementation during an 8-year time horizon. Economic effects were identified, adjusted for time value, and used to calculate net present values (NPVs) for each section of the department of radiology and for the department as a whole. The chest-bone section used the most resources. Changes in cost assumptions for the chest-bone section had a dominant effect on the department-wide NPV. The base-case NPV (i.e., that determined by using the initial assumptions) was negative, indicating that additional net costs are incurred by the radiology department from PACS implementation. PACS and CR provide cost savings only when a 12-year hardware life span is assumed, when CR equipment is removed from the analysis, or when digitized long-term archives are compressed at a rate of 10:1. Full PACS-CR implementation would not provide cost savings for a large, subspecialized department. However, institutions that are committed to CR implementation (for whom CR implementation would represent a sunk cost) or institutions that are able to archive images by using image compression will experience cost savings from PACS.
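    The analysis hinges on a standard net present value calculation: discount each year's incremental cash flow back to the present. A generic sketch follows; the cash flows and discount rate are placeholders, not the study's figures.

```python
# Generic net present value (NPV) calculation of the kind used in the analysis:
# discount each year's incremental cash flow (savings minus costs) back to
# present value. The cash flows below are placeholders, not the study's figures.
def npv(cash_flows, discount_rate):
    """cash_flows[0] is year 0 (typically the negative capital outlay)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical 8-year horizon: large year-0 investment, modest annual savings.
flows = [-3_000_000] + [350_000] * 8
print(f"NPV at 5%: ${npv(flows, 0.05):,.0f}")   # negative -> net cost, as in the study
```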

  5. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  6. Comparison of the actual costs during removal of concrete layer by high-speed water jets

    Czech Academy of Sciences Publication Activity Database

    Hela, R.; Bodnárová, L.; Novotný, M.; Sitek, Libor; Klich, Jiří; Wolf, I.; Foldyna, Josef

    2012-01-01

    Vol. 13, No. 4 (2012), pp. 763-775 ISSN 1611-1699 R&D Projects: GA MŠk ED2.1.00/03.0082 Grant - others: GA TA ČR(CZ) TA01010948; GA MPO(CZ) FR-TI1/387 Institutional support: RVO:68145535 Keywords: computation model * total technological costs * total fixed costs * total variable costs * Triple helix model Subject RIV: JQ - Machines; Tools Impact factor: 1.881, year: 2012 http://www.tandfonline.com/doi/pdf/10.3846/16111699.2011.645866

  7. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  8. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Norman, A. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab

    2017-03-15

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  9. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    Science.gov (United States)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  10. Training Physicians to Provide High-Value, Cost-Conscious Care: A Systematic Review

    NARCIS (Netherlands)

    Stammen, L.A.; Stalmeijer, R.E.; Paternotte, E.; Pool, A.O.; Driessen, E.W.; Scheele, F.; Stassen, L.P.S.

    2015-01-01

    Importance: Increasing health care expenditures are taxing the sustainability of the health care system. Physicians should be prepared to deliver high-value, cost-conscious care. Objective: To understand the circumstances in which the delivery of high-value, cost-conscious care is learned, with a goal

  11. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy, reduce the computational cost, and increase the reliability of computational methods for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.

  12. Quantitative analysis of cholesteatoma using high resolution computed tomography

    International Nuclear Information System (INIS)

    Kikuchi, Shigeru; Yamasoba, Tatsuya; Iinuma, Toshitaka.

    1992-01-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. Various locations of the middle ear cavity were measured, and their sizes were compared among pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author)

  13. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials

  14. High resolution computed tomography of the post partum pituitary gland

    International Nuclear Information System (INIS)

    Hinshaw, D.B.; Hasso, A.N.; Thompson, J.R.; Davidson, B.J.

    1984-01-01

    Eight volunteer post partum female patients were examined with high resolution computed tomography during the week immediately after delivery. All patients received high dose (40-70 g) intravenous iodine contrast administration. The scans were examined for pituitary gland height, shape and homogeneity. All of the patients had enlarged glands by the traditional standards (i.e. gland height of 8 mm or greater). The diaphragma sellae in every case bulged upward with a convex domed appearance. The glands were generally inhomogeneous. One gland had a 4 mm focal well defined area of decreased attenuation. Two patients who were studied again months later had glands which had returned to "normal" size. The enlarged, upwardly convex pituitary gland appears to be typical and normal for the recently post partum period. (orig.)

  15. FPGA based compute nodes for high level triggering in PANDA

    International Nuclear Information System (INIS)

    Kuehn, W; Gilardi, C; Kirschner, D; Lang, J; Lange, S; Liu, M; Perez, T; Yang, S; Schmitt, L; Jin, D; Li, L; Liu, Z; Lu, Y; Wang, Q; Wei, S; Xu, H; Zhao, D; Korcyl, K; Otwinowski, J T; Salabura, P

    2008-01-01

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several hundred Gb/s. FPGA based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and high level trigger processing. Data connectivity is provided via optical links as well as multiple Gb Ethernet ports. The boards will support trigger algorithms such as pattern recognition for RICH detectors, EM shower analysis, fast tracking algorithms and global event characterization. Besides VHDL, high level C-like hardware description languages will be considered to implement the firmware

  16. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
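
    QSPIN itself is a Java API and its classes are not reproduced here. As a language-neutral illustration of the QUBO formulation it targets, the Python sketch below runs a plain classical simulated annealer on a toy three-variable QUBO; the matrix, cooling schedule and step count are arbitrary assumptions, not QSPIN defaults.

        import math
        import random

        def qubo_energy(x, Q):
            """Energy of binary vector x under QUBO matrix Q: E = x^T Q x."""
            n = len(x)
            return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

        def simulated_annealing(Q, steps=5000, t0=2.0, t1=0.01, seed=0):
            rng = random.Random(seed)
            n = len(Q)
            x = [rng.randint(0, 1) for _ in range(n)]
            e = qubo_energy(x, Q)
            best_x, best_e = list(x), e
            for k in range(steps):
                t = t0 * (t1 / t0) ** (k / steps)   # geometric cooling schedule
                i = rng.randrange(n)                # propose a single bit flip
                x[i] ^= 1
                e_new = qubo_energy(x, Q)
                if e_new <= e or rng.random() < math.exp((e - e_new) / t):
                    e = e_new                       # accept the flip
                    if e < best_e:
                        best_x, best_e = list(x), e
                else:
                    x[i] ^= 1                       # reject: undo the flip
            return best_x, best_e

        # Tiny symmetric QUBO that favours x = [1, 1, 0] (energy -6).
        Q = [[-2.0, -1.0, 0.0],
             [-1.0, -2.0, 0.0],
             [ 0.0,  0.0, 1.0]]
        print(simulated_annealing(Q))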

  17. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantage of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed on previous high-resolution computed tomography during clinical-radiological follow-up of their lung disease, were studied by means of 64-row multi-slice computed tomography. Comparative evaluation of image quality was performed for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was reported in the detection of ground-glass opacity with both the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase in intra-observer agreement (k value 0.46) using volumetric computed tomography rather than high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  18. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called DIRAC. To evaluate the DIRAC scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare it with a real batch system and we obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and well describes the behaviour of a large-scale system. Thus we can study the scheduling of our system called DIRAC in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...

  19. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges have been actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on observed tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolution ranges from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
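
    The staggered leap-frog update named in this abstract can be shown in one dimension in a few lines. The sketch below is a minimal linear shallow-water example with a uniform depth and made-up domain and resolution; it is not the authors' nested non-linear inundation code.

        import numpy as np

        # 1-D linear shallow water on a staggered grid with leap-frog time stepping:
        # face velocities are updated from the elevation gradient, then cell
        # elevations from the velocity divergence. All parameters are illustrative.
        g, depth = 9.81, 4000.0                  # gravity (m/s^2), ocean depth (m)
        L, nx = 400e3, 400                       # domain length (m), number of cells
        dx = L / nx
        dt = 0.5 * dx / np.sqrt(g * depth)       # CFL-limited time step

        eta = np.exp(-((np.arange(nx) * dx - L / 2) / 20e3) ** 2)   # initial 1 m hump
        u = np.zeros(nx + 1)                                        # velocities at cell faces

        for _ in range(600):
            u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx           # momentum update
            eta -= depth * dt * (u[1:] - u[:-1]) / dx               # mass update

        print(float(eta.max()))                  # peak surface elevation after propagation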

  20. Investigation of Vocational High-School Students' Computer Anxiety

    Science.gov (United States)

    Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan

    2013-01-01

    With the advent of the computer technologies, we are increasingly encountering these technologies in every field of life. The fact that the computer technology is so much interwoven with the daily life makes it necessary to investigate certain psychological attitudes of those working with computers towards computers. As this study is limited to…

  1. Computer simulation to predict energy use, greenhouse gas emissions and costs for production of fluid milk using alternative processing methods

    Science.gov (United States)

    Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...

  2. High-value, cost-conscious health care: concepts for clinicians to evaluate the benefits, harms, and costs of medical interventions.

    Science.gov (United States)

    Owens, Douglas K; Qaseem, Amir; Chou, Roger; Shekelle, Paul

    2011-02-01

    Health care costs in the United States are increasing unsustainably, and further efforts to control costs are inevitable and essential. Efforts to control expenditures should focus on the value, in addition to the costs, of health care interventions. Whether an intervention provides high value depends on assessing whether its health benefits justify its costs. High-cost interventions may provide good value because they are highly beneficial; conversely, low-cost interventions may have little or no value if they provide little benefit. Thus, the challenge becomes determining how to slow the rate of increase in costs while preserving high-value, high-quality care. A first step is to decrease or eliminate care that provides no benefit and may even be harmful. A second step is to provide medical interventions that provide good value: medical benefits that are commensurate with their costs. This article discusses 3 key concepts for understanding how to assess the value of health care interventions. First, assessing the benefits, harms, and costs of an intervention is essential to understand whether it provides good value. Second, assessing the cost of an intervention should include not only the cost of the intervention itself but also any downstream costs that occur because the intervention was performed. Third, the incremental cost-effectiveness ratio estimates the additional cost required to obtain additional health benefits and provides a key measure of the value of a health care intervention.
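
    The incremental cost-effectiveness ratio described in the last sentence is simply the extra cost divided by the extra health benefit of one option over another. A minimal worked example, with hypothetical costs and quality-adjusted life-years (QALYs), is sketched below.

        # Incremental cost-effectiveness ratio (ICER); all inputs are hypothetical.
        def icer(cost_new, effect_new, cost_old, effect_old):
            """Additional cost per additional unit of health benefit (e.g. per QALY)."""
            return (cost_new - cost_old) / (effect_new - effect_old)

        # The new intervention costs $3,000 more and yields 0.2 extra QALYs.
        print(icer(cost_new=12_000, effect_new=6.2, cost_old=9_000, effect_old=6.0))
        # -> 15000.0 dollars per additional QALY gained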

  3. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Directory of Open Access Journals (Sweden)

    Gafurov Andrey

    2018-01-01

    Full Text Available The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the “Project analysis scenario” flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.

  4. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Science.gov (United States)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.
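
    Two of the building blocks named in these records, discounting at the weighted average cost of capital (WACC) and a one-way sensitivity scan of the project NPV, can be sketched compactly. The figures below are invented placeholders, and the sketch is not the authors' improved algorithm, only an illustration of those two steps.

        # WACC-discounted NPV with a one-way sensitivity scan (hypothetical inputs).
        def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
            """Weighted average cost of capital with a debt tax shield."""
            v = equity + debt
            return equity / v * cost_equity + debt / v * cost_debt * (1 - tax_rate)

        def npv(cash_flows, rate):
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

        rate = wacc(equity=40e6, debt=60e6, cost_equity=0.14, cost_debt=0.07, tax_rate=0.20)
        base_flows = [-90e6] + [12e6] * 15    # construction outlay, then 15 years of net income

        # Sensitivity of NPV to a critical variable (yearly net income +/- 20 %).
        for factor in (0.8, 1.0, 1.2):
            flows = [base_flows[0]] + [cf * factor for cf in base_flows[1:]]
            print(f"income x {factor:.1f}: NPV = {npv(flows, rate) / 1e6:.1f} M")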

  5. Thoracoabdominal computed tomography in trauma patients: a cost-consequences analysis

    NARCIS (Netherlands)

    Vugt, R. van; Kool, D.R.; Brink, M.; Dekker, H.M.; Deunk, J.; Edwards, M.J.R.

    2014-01-01

    BACKGROUND: CT is increasingly used during the initial evaluation of blunt trauma patients. In this era of increasing cost-awareness, the pros and cons of CT have to be assessed. OBJECTIVES: This study was performed to evaluate cost-consequences of different diagnostic algorithms that use

  6. Computer software to estimate timber harvesting system production, cost, and revenue

    Science.gov (United States)

    Dr. John E. Baumgras; Dr. Chris B. LeDoux

    1992-01-01

    Large variations in timber harvesting cost and revenue can result from the differences between harvesting systems, the variable attributes of harvesting sites and timber stands, or changing product markets. Consequently, system and site specific estimates of production rates and costs are required to improve estimates of harvesting revenue. This paper describes...

  7. Computation of piecewise affine terminal cost functions for model predictive control

    NARCIS (Netherlands)

    Brunner, F.D.; Lazar, M.; Allgöwer, F.; Fränzle, Martin; Lygeros, John

    2014-01-01

    This paper proposes a method for the construction of piecewise affine terminal cost functions for model predictive control (MPC). The terminal cost function is constructed on a predefined partition by solving a linear program for a given piecewise affine system, a stabilizing piecewise affine

  8. Social cost of heavy drinking and alcohol dependence in high-income countries.

    Science.gov (United States)

    Mohapatra, Satya; Patra, Jayadeep; Popova, Svetlana; Duhig, Amy; Rehm, Jürgen

    2010-06-01

    A comprehensive review of cost drivers associated with alcohol abuse, heavy drinking, and alcohol dependence for high-income countries was conducted. The data from 14 identified cost studies were tabulated according to the potential direct and indirect cost drivers. The costs associated with alcohol abuse, alcohol dependence, and heavy drinking were calculated. The weighted average of the total societal cost due to alcohol abuse, expressed as a percentage of gross domestic product (GDP) at purchasing power parity (PPP), was 1.58%. The cost due to heavy drinking and/or alcohol dependence as a percentage of GDP (PPP) was estimated to be 0.96%. On average, the alcohol-attributable indirect cost due to loss of productivity is more than the alcohol-attributable direct cost. Most of the countries seem to incur 1% or more of their GDP (PPP) as alcohol-attributable costs, which is a high toll for a single factor and an enormous burden on public health. The majority of alcohol-attributable costs were incurred as a consequence of heavy drinking and/or alcohol dependence. Effective prevention and treatment measures should be implemented to reduce these costs.
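
    The headline figure of 1.58% of GDP is described as a weighted average across country-level studies. The sketch below shows that aggregation step with invented country figures; it is not the reviewed data.

        # GDP-weighted average of alcohol-attributable cost shares (hypothetical data).
        countries = [
            # (cost as % of GDP-PPP, GDP-PPP weight in billions)
            (1.4, 18_000),
            (2.1, 4_000),
            (1.2, 3_000),
        ]
        weighted = sum(share * w for share, w in countries) / sum(w for _, w in countries)
        print(f"weighted average cost: {weighted:.2f}% of GDP (PPP)")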

  9. Cost-effectiveness of computed tomography colonography in colorectal cancer screening: a systematic review.

    Science.gov (United States)

    Hanly, Paul; Skally, Mairead; Fenlon, Helen; Sharp, Linda

    2012-10-01

    The European Code Against Cancer recommends individuals aged ≥ 50 should participate in colorectal cancer screening. CT-colonography (CTC) is one of several screening tests available. We systematically reviewed evidence on, and identified key factors influencing, cost-effectiveness of CTC screening. PubMed, Medline, and the Cochrane library were searched for cost-effectiveness or cost-utility analyses of CTC-based screening, published in English, January 1999 to July 2010. Data was abstracted on setting, model type and horizon, screening scenario(s), comparator(s), participants, uptake, CTC performance and cost, effectiveness, ICERs, and whether extra-colonic findings and medical complications were considered. Sixteen studies were identified from the United States (n = 11), Canada (n = 2), and France, Italy, and the United Kingdom (1 each). Markov state-transition (n = 14) or microsimulation (n = 2) models were used. Eleven considered direct medical costs only; five included indirect costs. Fourteen compared CTC with no screening; fourteen compared CTC with colonoscopy-based screening; fewer compared CTC with sigmoidoscopy (8) or fecal tests (4). Outcomes assessed were life-years gained/saved (13), QALYs (2), or both (1). Three considered extra-colonic findings; seven considered complications. CTC appeared cost-effective versus no screening and, in general, flexible sigmoidoscopy and fecal occult blood testing. Results were mixed comparing CTC to colonoscopy. Parameters most influencing cost-effectiveness included: CTC costs, screening uptake, threshold for polyp referral, and extra-colonic findings. Evidence on cost-effectiveness of CTC screening is heterogeneous, due largely to between-study differences in comparators and parameter values. Future studies should: compare CTC with currently favored tests, especially fecal immunochemical tests; consider extra-colonic findings; and conduct comprehensive sensitivity analyses.

  10. Integrated Computational Materials Engineering Development of Advanced High Strength Steel for Lightweight Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Hector, Jr., Louis G. [General Motors, Warren, MI (United States); McCarty, Eric D. [United States Automotive Materials Partnership LLC (USAMP), Southfield, MI (United States)

    2017-07-31

    The goal of the ICME 3GAHSS project was to successfully demonstrate the applicability of Integrated Computational Materials Engineering (ICME) for the development and deployment of third generation advanced high strength steels (3GAHSS) for immediate weight reduction in passenger vehicles. The ICME approach integrated results from well-established computational and experimental methodologies to develop a suite of material constitutive models (deformation and failure), manufacturing process and performance simulation modules, a properties database, as well as the computational environment linking them together for both performance prediction and material optimization. This is the Final Report for the ICME 3GAHSS project, which achieved the following objectives: 1) Developed a 3GAHSS ICME model, which includes atomistic, crystal plasticity, state variable and forming models. The 3GAHSS model was implemented in commercially available LS-DYNA and a user guide was developed to facilitate use of the model. 2) Developed and produced two 3GAHSS alloys using two different chemistries and manufacturing processes, for use in calibrating and validating the 3GAHSS ICME Model. 3) Optimized the design of an automotive subassembly by substituting 3GAHSS for AHSS yielding a design that met or exceeded all baseline performance requirements with a 30% mass savings. A technical cost model was also developed to estimate the cost per pound of weight saved when substituting 3GAHSS for AHSS. The project demonstrated the potential for 3GAHSS to achieve up to 30% weight savings in an automotive structure at a cost penalty of up to $0.32 to $1.26 per pound of weight saved. The 3GAHSS ICME Model enables the user to design 3GAHSS to desired mechanical properties in terms of strength and ductility.

  11. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spilling and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  12. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    Science.gov (United States)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.

  13. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  14. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI independent of the wave frequency. In general, the high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and depend on frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |uε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where uε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
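
    The collocation idea behind the method can be illustrated in one stochastic dimension: when the quantity of interest is smooth in the random parameter, a handful of collocation points matches a large Monte Carlo sample. The sketch below uses an arbitrary smooth stand-in for the QoI, not the wave-energy average of the abstract.

        import numpy as np

        def qoi(y):
            return np.cos(0.5 * y) ** 2           # smooth in the stochastic variable

        # Collocation for Y ~ N(0, 1): E[q(Y)] ~ (1/sqrt(pi)) * sum_i w_i q(sqrt(2) x_i)
        x, w = np.polynomial.hermite.hermgauss(5)             # only 5 collocation points
        mean_colloc = np.sum(w * qoi(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

        # Monte Carlo reference with many samples.
        rng = np.random.default_rng(0)
        mean_mc = qoi(rng.standard_normal(200_000)).mean()

        print(mean_colloc, mean_mc)               # agree to roughly three decimal places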

  15. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    Science.gov (United States)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived to be a mixed system of differential and algebraic equations (DAE's). The DAE's are kept in implicit form to save arithmetic and preserve the sparsity of the system and are solved by the robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within a given tolerance using Newton's iterative method. For each iteration, a linear system of the form J ΔX = E has to be solved. The computational cost for solving this linear system directly by LU factorization is O(n^3), and it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system the multiplicative and additive computational costs for solving J ΔX = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
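
    The cost argument above hinges on solving J ΔX = E without a dense LU factorization. The sketch below contrasts a sparsity-exploiting solve with the dense route for a generic banded matrix; the matrix is a stand-in for, not a derivation of, the pendulum Jacobian.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import splu

        n = 2000
        # Banded stand-in for a chain-structured Jacobian (tridiagonal here).
        J_sparse = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
        E = np.ones(n)

        dX = splu(J_sparse).solve(E)                        # sparse LU exploits the band structure
        dX_dense = np.linalg.solve(J_sparse.toarray(), E)   # dense LU: O(n^3) work

        print(np.allclose(dX, dX_dense))                    # True: same correction, very different cost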

  16. A high dutycycle low cost multichannel analyser for electron spectroscopy

    International Nuclear Information System (INIS)

    Norell, K.E.; Baltzer, P.

    1983-03-01

    A high dutycycle multichannel analyzer has been designed and used in time-of-flight electron spectroscopy. The memory capacity is 64k counts. The number of channels is 8192 with a time resolution of 100 ns. An oscilloscope is used to display the spectra synchronous with the counting. The unit has been built with standard electronic components. (author)

  17. Calculus in High School--At What Cost?

    Science.gov (United States)

    Sorge, D. H.; Wheatley, G. H.

    1977-01-01

    Evidence on the decline in preparation of entering calculus students and the relationship to high school preparation is presented, focusing on the trend toward the de-emphasis of trigonometry and analytic geometry in favor of calculus. Data on students' perception of the adequacy of their preparation are also presented. (Author/MN)

  18. A computational model for determining the minimal cost expansion alternatives in transmission systems planning

    International Nuclear Information System (INIS)

    Pinto, L.M.V.G.; Pereira, M.V.F.; Nunes, A.

    1989-01-01

    A computational model for determining an economical transmission expansion plan, based on decomposition techniques, is presented. The algorithm was applied to the Brazilian South System and was able to find an optimal solution with low computational resources. Some extensions of this methodology are being investigated: a probabilistic version and expansion under financial constraints. (C.G.C.). 4 refs, 7 figs

  19. Paper Circuits: A Tangible, Low Threshold, Low Cost Entry to Computational Thinking

    Science.gov (United States)

    Lee, Victor R.; Recker, Mimi

    2018-01-01

    In this paper, we propose that paper circuitry provides a productive space for exploring aspects of computational thinking, an increasingly critical 21st century skills for all students. We argue that the creation and operation of paper circuits involve learning about computational concepts such as rule-based constraints, operations, and defined…

  20. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control the combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressure was observed at 220 MPa, 250 MPa and 280 MPa. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of biodiesel blends such as spray penetration length, spray angle and mixture formation of fuel-air mixing. The results show that, as the injection pressure increases, a wider spray angle is produced by both biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; a longer spray penetration length thus promotes fuel-air mixing.

  1. Computational aspects in high intensity ultrasonic surgery planning.

    Science.gov (United States)

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured one. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. Copyright 2009 Elsevier Ltd. All rights reserved.

  2. Low-Cost, High-Performance Hall Thruster Support System

    Science.gov (United States)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and a stable and sustained operation during severe Hall effect thruster current oscillations. Laboratory testing has demonstrated discharge module efficiency of 96 percent, which is considerably higher than current state of the art.

  3. Computed tomographic colonography to screen for colorectal cancer, extracolonic cancer, and aortic aneurysm: model simulation with cost-effectiveness analysis.

    Science.gov (United States)

    Hassan, Cesare; Pickhardt, Perry J; Pickhardt, Perry; Laghi, Andrea; Kim, Daniel H; Kim, Daniel; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio

    2008-04-14

    In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasm, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with a savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. At sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations of costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was less than estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When detection of extracolonic findings such as AAA and extracolonic cancer are considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (ie, more clinically effective and more cost
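
    The Markov cohort machinery behind such simulations can be sketched in a few lines: a cohort is redistributed among health states each cycle while life-years and costs accumulate. The states, transition probabilities and costs below are invented placeholders, not parameters of the published model.

        import numpy as np

        # States: well, cancer, dead; rows of P sum to 1 (yearly transitions).
        P = np.array([
            [0.990, 0.005, 0.005],
            [0.000, 0.850, 0.150],
            [0.000, 0.000, 1.000],                      # dead is absorbing
        ])
        yearly_cost = np.array([50.0, 20_000.0, 0.0])   # per person-year in each state

        cohort = np.array([100_000.0, 0.0, 0.0])        # everyone starts well at age 50
        life_years = cost = 0.0
        for year in range(30):                          # 30-year horizon
            life_years += cohort[:2].sum()              # person-years alive this cycle
            cost += (cohort * yearly_cost).sum()
            cohort = cohort @ P                         # advance the cohort one year

        print(life_years / 100_000, cost / 100_000)     # per-person life-years and cost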

  4. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) which hardware vendors and software developers could utilize to steer power consumption.

  5. High resolution computed tomography of the middle ear

    International Nuclear Information System (INIS)

    Ikeda, Katsuhisa; Sakurai, Tokio; Saijo, Shigeru; Kobayashi, Toshimitsu

    1983-01-01

    High resolution computed tomography was performed in 57 cases with various middle ear diseases (chronic otitis media, otitis media with effusion, acute otitis media and atelectasis). Although further improvement in detectability is necessary in order to discriminate each type of the soft tissue lesions, CT is the most useful method currently available in detecting the small structures and soft tissue lesions of the middle ear. In particular, the lesions at the tympanic isthmus and tympanic fold could very clearly be detected only by CT. In acute otitis media, lesions usually started in the attic and spread to the mastoid air cells. In otitis media with effusion, the soft tissue shadow was observed in the attic and mastoid air cells. CT is valuable in diagnosis, evaluation of the treatment and prognosis, and analysis of pathophysiology in middle ear diseases. (author)

  6. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
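
    The strong-scaling quantities reported in such studies are simple ratios: speed-up is the single-core wall time divided by the n-core wall time, and parallel efficiency is speed-up per core. The timings below are made up for illustration and are not DL_POLY measurements.

        # Strong-scaling speed-up and efficiency from wall-clock timings (hypothetical).
        timings = {1: 3600.0, 16: 260.0, 64: 85.0, 256: 40.0}   # cores -> seconds

        t1 = timings[1]
        for cores, t in sorted(timings.items()):
            speedup = t1 / t
            efficiency = speedup / cores
            print(f"{cores:4d} cores: speed-up {speedup:6.1f}, efficiency {efficiency:5.2f}")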

  7. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of powering implanted devices wirelessly through energy coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, which can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  8. Paracoccidioidomycosis: High-resolution computed tomography-pathologic correlation

    International Nuclear Information System (INIS)

    Marchiori, Edson; Valiante, Paulo Marcos; Mano, Claudia Mauro; Zanetti, Glaucia; Escuissato, Dante L.; Souza, Arthur Soares; Capone, Domenico

    2011-01-01

    Objective: The purpose of this study was to describe the high-resolution computed tomography (HRCT) features of pulmonary paracoccidioidomycosis and to correlate them with pathologic findings. Methods: The study included 23 adult patients with pulmonary paracoccidioidomycosis. All patients had undergone HRCT, and the images were retrospectively analyzed by two chest radiologists, who reached decisions by consensus. An experienced lung pathologist reviewed all pathological specimens. The HRCT findings were correlated with histopathologic data. Results: The predominant HRCT findings included areas of ground-glass opacities, nodules, interlobular septal thickening, airspace consolidation, cavitation, and fibrosis. The main pathological features consisted of alveolar and interlobular septal inflammatory infiltration, granulomas, alveolar exudate, cavitation secondary to necrosis, and fibrosis. Conclusion: Paracoccidioidomycosis can present different tomography patterns, which can involve both the interstitium and the airspace. These abnormalities can be pathologically correlated with inflammatory infiltration, granulomatous reaction, and fibrosis.

  9. Near DC eddy current measurement of aluminum multilayers using MR sensors and commodity low-cost computer technology

    Science.gov (United States)

    Perry, Alexander R.

    2002-06-01

    Low Frequency Eddy Current (EC) probes are capable of measurement from 5 MHz down to DC through the use of Magnetoresistive (MR) sensors. Choosing components with appropriate electrical specifications allows them to be matched to the power and impedance characteristics of standard computer connectors. This permits direct attachment of the probe to inexpensive computers, thereby eliminating external power supplies, amplifiers and modulators that have heretofore precluded very low system purchase prices. Such price reduction is key to increased market penetration in General Aviation maintenance and consequent reduction in recurring costs. This paper examines our computer software CANDETECT, which implements this approach and permits effective probe operation. Results are presented to show the intrinsic sensitivity of the software and demonstrate its practical performance when seeking cracks in the underside of a thick aluminum multilayer structure. The majority of the General Aviation light aircraft fleet uses rivets and screws to attach sheet aluminum skin to the airframe, resulting in similar multilayer lap joints.

  10. Integrated cost estimation methodology to support high-performance building design

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Prasad; Greden, Lara; Eijadi, David; McDougall, Tom [The Weidt Group, Minnetonka (United States); Cole, Ray [Axiom Engineers, Monterey (United States)

    2007-07-01

    Design teams evaluating the performance of energy conservation measures (ECMs) calculate energy savings rigorously with established modelling protocols, accounting for the interaction between various measures. However, incremental cost calculations do not have a similar rigor. Often there is no recognition of cost reductions with integrated design, nor is there assessment of cost interactions amongst measures. This lack of rigor feeds the notion that high-performance buildings cost more, creating a barrier for design teams pursuing aggressive high-performance outcomes. This study proposes an alternative integrated methodology to arrive at a lower perceived incremental cost for improved energy performance. The methodology is based on the use of energy simulations as a means towards integrated design and cost estimation. Various points along the spectrum of integration are identified and characterized by the amount of design effort invested, the scheduling of effort, and the relative energy performance of the resultant design. It includes a study of the interactions between building system parameters as they relate to capital costs. Several cost interactions amongst energy measures are found to be significant. The value of this approach is demonstrated with alternatives in a case study that shows the differences between perceived costs for energy measures along various points on the integration spectrum. These alternatives show design tradeoffs and identify how decisions would have been different with a standard costing approach. Areas of further research to make the methodology more robust are identified. Policy measures to encourage the integrated approach and reduce the barriers towards improved energy performance are discussed.
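
    A hypothetical numerical illustration of such a cost interaction (all figures invented): a better envelope lowers the peak cooling load, so the chiller can be downsized, and the combined incremental cost of the two measures is smaller than the envelope measure costed in isolation:

```python
# Hypothetical illustration of the cost interaction the record describes:
# a better envelope lowers peak cooling load, so the chiller can be
# downsized, and the combined incremental cost of the two measures is less
# than the stand-alone incremental cost of the envelope. All figures invented.
ENVELOPE_UPGRADE_COST = 40_000.0          # added cost of the better envelope
CHILLER_COST_PER_TON = 1_200.0            # installed cost per ton of cooling
BASE_PEAK_TONS = 200.0                    # peak load with the baseline envelope
UPGRADED_PEAK_TONS = 170.0                # peak load with the improved envelope

def incremental_cost(envelope_upgraded, chiller_resized):
    cost = ENVELOPE_UPGRADE_COST if envelope_upgraded else 0.0
    tons = UPGRADED_PEAK_TONS if (envelope_upgraded and chiller_resized) else BASE_PEAK_TONS
    cost += (tons - BASE_PEAK_TONS) * CHILLER_COST_PER_TON   # chiller cost delta
    return cost

standalone = incremental_cost(True, chiller_resized=False)
integrated = incremental_cost(True, chiller_resized=True)
print(f"envelope measure costed in isolation:  +${standalone:,.0f}")
print(f"envelope measure with resized chiller: +${integrated:,.0f}")
```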

  11. Minimizing total costs of forest roads with computer-aided design ...

    Indian Academy of Sciences (India)

    imum total road costs, while conforming to design specifications, environmental ..... quality, and enhancing fish and wildlife habitat, an appropriate design ..... Soil, Water and Timber Management: Forest Engineering Solutions in Response to.

  12. A probabilistic approach to the computation of the levelized cost of electricity

    International Nuclear Information System (INIS)

    Geissmann, Thomas

    2017-01-01

    This paper sets forth a novel approach to calculate the levelized cost of electricity (LCOE) using a probabilistic model that accounts for endogenous input parameters. The approach is applied to the example of a nuclear and gas power project. Monte Carlo simulation results show that a correlation between input parameters has a significant effect on the model outcome. By controlling for endogeneity, a statistically significant difference in the mean LCOE estimate and a change in the order of input leverages is observed. Moreover, the paper discusses the role of discounting options and external costs in detail. In contrast to the gas power project, the economic viability of the nuclear project is considerably weaker. - Highlights: • First model of levelized cost of electricity accounting for uncertainty and endogeneities in input parameters. • Allowance for endogeneities significantly affects results. • Role of discounting options and external costs is discussed and modelled.
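
    The record above computes LCOE as discounted lifetime costs divided by discounted lifetime energy, with Monte Carlo sampling over correlated inputs. As a minimal sketch only (the capital cost, O&M cost, correlation, capacity factor and discount rate below are invented, not Geissmann's figures), such a simulation might look like:

```python
import numpy as np

def lcoe(capex, annual_opex, annual_mwh, rate, years):
    """Levelized cost of electricity: discounted costs / discounted energy."""
    t = np.arange(1, years + 1)
    disc = (1.0 + rate) ** -t
    costs = capex + np.sum(annual_opex * disc)
    energy = np.sum(annual_mwh * disc)
    return costs / energy

rng = np.random.default_rng(0)
n = 10_000
# Correlated uncertain inputs (illustrative figures for a 1 MW plant):
# overnight cost and O&M are drawn jointly so that expensive builds also
# tend to have higher operating costs.
mean = [4000e3, 90e3]                      # $/MW capex, $/MW-yr O&M
cov = [[(400e3) ** 2, 0.6 * 400e3 * 10e3],
       [0.6 * 400e3 * 10e3, (10e3) ** 2]]
capex, opex = rng.multivariate_normal(mean, cov, size=n).T
samples = np.array([lcoe(c, o, annual_mwh=8760 * 0.9, rate=0.07, years=40)
                    for c, o in zip(capex, opex)])
print(f"mean LCOE ${samples.mean():.1f}/MWh, "
      f"90% interval [{np.percentile(samples, 5):.1f}, {np.percentile(samples, 95):.1f}]")
```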

  13. Model implementation for dynamic computation of system cost for advanced life support

    Science.gov (United States)

    Levri, J. A.; Vaccari, D. A.

    2004-01-01

    Life support system designs for long-duration space missions have a multitude of requirements drivers, such as mission objectives, political considerations, cost, crew wellness, inherent mission attributes, as well as many other influences. Evaluation of requirements satisfaction can be difficult, particularly at an early stage of mission design. Because launch cost is a critical factor and relatively easy to quantify, it is a point of focus in early mission design. The method used to determine launch cost influences the accuracy of the estimate. This paper discusses the appropriateness of dynamic mission simulation in estimating the launch cost of a life support system. This paper also provides an abbreviated example of a dynamic simulation life support model and possible ways in which such a model might be utilized for design improvement. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
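
    As a toy illustration of the idea (not the authors' model; the consumption rate, recovery efficiency and launch price below are invented), a dynamic simulation can integrate daily resupply mass and convert the total launched mass into a launch cost:

```python
# Hypothetical sketch of a dynamic launch-cost estimate for a life support
# system: resupply mass accumulates day by day as a function of recycling
# efficiency, and launch cost scales with total launched mass.
def launch_cost(mission_days, crew, hardware_kg,
                water_per_person_kg=3.5, recovery=0.90,
                cost_per_kg=20_000.0):
    resupply_kg = 0.0
    for _ in range(mission_days):
        daily_need = crew * water_per_person_kg
        resupply_kg += daily_need * (1.0 - recovery)   # make-up water launched
    total_kg = hardware_kg + resupply_kg
    return total_kg * cost_per_kg, total_kg

cost, mass = launch_cost(mission_days=600, crew=4, hardware_kg=1500.0)
print(f"launched mass ≈ {mass:,.0f} kg, launch cost ≈ ${cost/1e6:,.1f}M")
```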

  14. High-order computer-assisted estimates of topological entropy

    Science.gov (United States)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C⁰-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincaré maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C⁰-errors of size 10⁻¹⁰ to 10⁻¹⁴, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
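
    The rigorous-enclosure idea can be illustrated with a toy example (this is not the verified Taylor Model arithmetic used in the work, just a first-order polynomial plus a Lagrange remainder bound):

```python
# Toy illustration of a Taylor-model-style enclosure: enclose sin(x) on
# [-0.1, 0.1] by its first-order Taylor polynomial P(x) = x plus a rigorous
# Lagrange remainder bound |R| <= |x|^3 / 6.
import math

a = 0.1                      # half-width of the domain
remainder = a ** 3 / 6.0     # rigorous bound on the truncation error

def enclosure(x):
    """Interval guaranteed to contain sin(x) for |x| <= a."""
    p = x                    # first-order Taylor polynomial about 0
    return (p - remainder, p + remainder)

# Check the enclosure on a grid of sample points.
for i in range(-10, 11):
    x = i * a / 10.0
    lo, hi = enclosure(x)
    assert lo <= math.sin(x) <= hi
print(f"sin enclosed on [-{a}, {a}] with C0 error bound {remainder:.2e}")
```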

  15. Development of superconductor electronics technology for high-end computing

    Energy Technology Data Exchange (ETDEWEB)

    Silver, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kleinsasser, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kerber, G [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Herr, Q [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Dorojevets, M [Department of Electrical and Computer Engineering, SUNY-Stony Brook, NY 11794-2350 (United States); Bunyk, P [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Abelson, L [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2003-12-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, and demonstrated methods for recycling the bias current and for data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  16. Development of superconductor electronics technology for high-end computing

    International Nuclear Information System (INIS)

    Silver, A; Kleinsasser, A; Kerber, G; Herr, Q; Dorojevets, M; Bunyk, P; Abelson, L

    2003-01-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, and demonstrated methods for recycling the bias current and for data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  17. Computation of spot prices and congestion costs in large interconnected power systems

    International Nuclear Information System (INIS)

    Mukerji, R.; Jordan, G.A.; Clayton, R.; Haringa, G.E.

    1995-01-01

    Foremost among the new paradigms for the US utility industry is the ''poolco'' concept proposed by Prof. William W. Hogan of Harvard University. This concept uses a central pool or power exchange in which physical power is traded based on spot prices or market clearing prices. The rapid and accurate calculation of these ''spot'' prices and associated congestion costs for large interconnected power systems is the central tenet upon which the poolco concept is based. The market clearing price would be the same throughout the system if there were no system losses and transmission limitations did not exist. System losses cause small differences in market clearing prices as the cost of supplying a MW at various load buses includes the cost of losses. Transmission limits may cause large differences in market clearing prices between regions as low cost generation is blocked by the transmission constraints from serving certain loads. Models currently in use in the electric power industry for spot price calculation range from ''bubble diagram'' contract-path models to full electrical representations such as GE-MAPS. The modeling aspects of the full electrical representation are included in the Appendix. The problem with the bubble diagram representation is that these models are liable to produce unacceptably large errors in the calculation of spot prices and congestion costs. The subtleties of the calculation of spot prices and congestion costs are illustrated in this paper.
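
    As an illustration of why transmission limits split market clearing prices, the following two-bus sketch (offer prices, loads and line limit are hypothetical, and this is not the GE-MAPS formulation) computes nodal spot prices as the incremental cost of serving one more MW at each bus:

```python
# Illustrative two-bus example: cheap generation at bus A, expensive
# generation at bus B, load at both buses, and a transmission limit between
# them. Nodal spot prices are obtained numerically as the cost of serving
# one extra MW at each bus; their difference is the congestion cost.
CHEAP, EXPENSIVE = 20.0, 50.0      # $/MWh offers at bus A and bus B
LINE_LIMIT = 300.0                 # MW transfer capability A -> B

def dispatch_cost(load_a, load_b):
    """Least-cost dispatch honouring the A->B transmission limit."""
    flow = min(load_b, LINE_LIMIT)             # cheap energy exported to B
    gen_a = load_a + flow                      # serves A plus the export
    gen_b = load_b - flow                      # remainder met locally at B
    return gen_a * CHEAP + gen_b * EXPENSIVE

def spot_price(load_a, load_b, bus):
    """Marginal cost of one more MW of load at the given bus."""
    base = dispatch_cost(load_a, load_b)
    bumped = dispatch_cost(load_a + (bus == "A"), load_b + (bus == "B"))
    return bumped - base

for load_b in (200.0, 400.0):                  # below and above the limit
    pa = spot_price(100.0, load_b, "A")
    pb = spot_price(100.0, load_b, "B")
    print(f"load at B = {load_b:5.0f} MW: price A = ${pa:.0f}, "
          f"price B = ${pb:.0f}, congestion cost = ${pb - pa:.0f}/MWh")
```

    When the line is unconstrained both buses clear at the cheap offer; once the limit binds, bus B clears at the expensive offer and the price difference is the congestion cost.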

  18. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We need to develop capabilities to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, in order to extract meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on development and implementation of market-ready leading-edge high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy, etc.

  19. Can broader diffusion of value-based insurance design increase benefits from US health care without increasing costs? Evidence from a computer simulation model.

    Directory of Open Access Journals (Sweden)

    R Scott Braithwaite

    2010-02-01

    Full Text Available BACKGROUND: Evidence suggests that cost sharing (i.e., copayments and deductibles) decreases health expenditures but also reduces essential care. Value-based insurance design (VBID) has been proposed to encourage essential care while controlling health expenditures. Our objective was to estimate the impact of broader diffusion of VBID on US health care benefits and costs. METHODS AND FINDINGS: We used a published computer simulation of costs and life expectancy gains from US health care to estimate the impact of broader diffusion of VBID. Two scenarios were analyzed: (1) applying VBID solely to pharmacy benefits and (2) applying VBID to both pharmacy benefits and other health care services (e.g., devices). We assumed that cost sharing would be eliminated for high-value services (<$100,000 per life-year), would remain unchanged for intermediate-value services ($100,000-$300,000 per life-year or unknown), and would be increased for low-value services (>$300,000 per life-year). All costs are provided in 2003 US dollars. Our simulation estimated that approximately 60% of health expenditures in the US are spent on low-value services, 20% are spent on intermediate-value services, and 20% are spent on high-value services. Correspondingly, the vast majority (80%) of health expenditures would have cost sharing that is impacted by VBID. With prevailing patterns of cost sharing, health care conferred 4.70 life-years at a per-capita annual expenditure of US$5,688. Broader diffusion of VBID to pharmaceuticals increased the benefit conferred by health care by 0.03 to 0.05 additional life-years, without increasing costs and without increasing out-of-pocket payments. Broader diffusion of VBID to other health care services could increase the benefit conferred by health care by 0.24 to 0.44 additional life-years, also without increasing costs and without increasing overall out-of-pocket payments. Among those without health insurance, using cost saving from VBID to subsidize insurance coverage would increase the benefit conferred by health care by 1.21 life-years, a 31% increase. CONCLUSION: Broader diffusion of VBID may amplify benefits from

  20. Can broader diffusion of value-based insurance design increase benefits from US health care without increasing costs? Evidence from a computer simulation model.

    Science.gov (United States)

    Braithwaite, R Scott; Omokaro, Cynthia; Justice, Amy C; Nucifora, Kimberly; Roberts, Mark S

    2010-02-16

    Evidence suggests that cost sharing (i.e., copayments and deductibles) decreases health expenditures but also reduces essential care. Value-based insurance design (VBID) has been proposed to encourage essential care while controlling health expenditures. Our objective was to estimate the impact of broader diffusion of VBID on US health care benefits and costs. We used a published computer simulation of costs and life expectancy gains from US health care to estimate the impact of broader diffusion of VBID. Two scenarios were analyzed: (1) applying VBID solely to pharmacy benefits and (2) applying VBID to both pharmacy benefits and other health care services (e.g., devices). We assumed that cost sharing would be eliminated for high-value services (<$100,000 per life-year), would remain unchanged for intermediate-value services ($100,000-$300,000 per life-year or unknown), and would be increased for low-value services (>$300,000 per life-year). All costs are provided in 2003 US dollars. Our simulation estimated that approximately 60% of health expenditures in the US are spent on low-value services, 20% are spent on intermediate-value services, and 20% are spent on high-value services. Correspondingly, the vast majority (80%) of health expenditures would have cost sharing that is impacted by VBID. With prevailing patterns of cost sharing, health care conferred 4.70 life-years at a per-capita annual expenditure of US$5,688. Broader diffusion of VBID to pharmaceuticals increased the benefit conferred by health care by 0.03 to 0.05 additional life-years, without increasing costs and without increasing out-of-pocket payments. Broader diffusion of VBID to other health care services could increase the benefit conferred by health care by 0.24 to 0.44 additional life-years, also without increasing costs and without increasing overall out-of-pocket payments. Among those without health insurance, using cost saving from VBID to subsidize insurance coverage would increase the benefit conferred by health care by 1.21 life-years, a 31% increase.
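
    A back-of-the-envelope sketch of the redistribution idea, using the 60/20/20% spending split and the US$5,688 per-capita expenditure quoted above but with invented cost-sharing rates, shows how copayments can be removed from high-value care and shifted onto low-value care without changing total out-of-pocket payments:

```python
# The 60/20/20% split of spending by value tier and the $5,688 per-capita
# expenditure come from the record; the cost-sharing rates are hypothetical.
PER_CAPITA = 5688.0
SHARE = {"low": 0.60, "intermediate": 0.20, "high": 0.20}

baseline_rate = {"low": 0.15, "intermediate": 0.15, "high": 0.15}   # uniform copays
vbid_rate     = {"low": 0.20, "intermediate": 0.15, "high": 0.00}   # value-based copays

def out_of_pocket(rates):
    return sum(PER_CAPITA * SHARE[tier] * rates[tier] for tier in SHARE)

base, vbid = out_of_pocket(baseline_rate), out_of_pocket(vbid_rate)
print(f"baseline out-of-pocket ≈ ${base:.0f}, VBID out-of-pocket ≈ ${vbid:.0f}")
# With these made-up rates the totals match, illustrating how cost sharing
# can be removed from high-value care and shifted onto low-value care
# without raising overall out-of-pocket payments.
```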

  1. Concept and computation of radiation dose at high energies

    International Nuclear Information System (INIS)

    Sarkar, P.K.

    2010-01-01

    Computational dosimetry, a subdiscipline of computational physics devoted to radiation metrology, is determination of absorbed dose and other dose related quantities by numbers. Computations are done separately both for external and internal dosimetry. The methodology used in external beam dosimetry is necessarily a combination of experimental radiation dosimetry and theoretical dose computation since it is not feasible to plan any physical dose measurements from inside a living human body

  2. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  3. Computer Simulation Studies of Ion Channels at High Temperatures

    Science.gov (United States)

    Song, Hyun Deok

    The gramicidin channel is the smallest known biological ion channel, and it exhibits cation selectivity. Recently, Dr. John Cuppoletti's group at the University of Cincinnati showed that the gramicidin channel can function at high temperatures (360 ˜ 380K) with significant currents. This finding may have significant implications for fuel cell technology. In this thesis, we have examined the gramicidin channel at 300K, 330K, and 360K by computer simulation. We have investigated how the temperature affects the current and differences in magnitude of free energy between the two gramicidin forms, the helical dimer (HD) and the double helix (DH). A slight decrease of the free energy barrier inside the gramicidin channel and increased diffusion at high temperatures result in an increase of current. An applied external field of 0.2V/nm along the membrane normal results in directly observable ion transport across the channels at high temperatures for both HD and DH forms. We found that higher temperatures also affect the probability distribution of hydrogen bonds, the bending angle, the distance between dimers, and the size of the pore radius for the helical dimer structure. These findings may be related to the gating of the gramicidin channel. Methanococcus jannaschii (MJ) is a methane-producing thermophile, which was discovered at a depth of 2600m in a Pacific Ocean vent in 1983. It has the ability to thrive at high temperatures and high pressures, which are unfavorable for most life forms. There have been some experiments to study its stability under extreme conditions, but still the origin of the stability of MJ is not exactly known. MJ0305 is the chloride channel protein from the thermophile MJ. After generating a structure of MJ0305 by homology modeling based on the Ecoli ClC templates, we examined the thermal stability, and the network stability from the change of network entropy calculated from the adjacency matrices of the protein. High temperatures increase the

  4. Medicaid care management: description of high-cost addictions treatment clients.

    Science.gov (United States)

    Neighbors, Charles J; Sun, Yi; Yerneni, Rajeev; Tesiny, Ed; Burke, Constance; Bardsley, Leland; McDonald, Rebecca; Morgenstern, Jon

    2013-09-01

    High utilizers of alcohol and other drug treatment (AODTx) services are a priority for healthcare cost control. We examine characteristics of Medicaid-funded AODTx clients, comparing three groups: individuals cost clients in the top decile of AODTx expenditures (HC; n=5,718); and 1760 enrollees in a chronic care management (CM) program for HC clients implemented in 22 counties in New York State. Medicaid and state AODTx registry databases were combined to draw demographic, clinical, social needs and treatment history data. HC clients accounted for 49% of AODTx costs funded by Medicaid. As expected, HC clients had significant social welfare needs, comorbid medical and psychiatric conditions, and use of inpatient services. The CM program was successful in enrolling some high-needs, high-cost clients but faced barriers to reaching the most costly and disengaged individuals. Copyright © 2013 Elsevier Inc. All rights reserved.
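
    For illustration only (column names and figures are invented, not the study's data), the top-decile grouping of clients by AODTx expenditure could be computed along these lines:

```python
# Hypothetical sketch of the "top decile of AODTx expenditures" grouping
# used in the record above. Column names and amounts are invented.
import pandas as pd

claims = pd.DataFrame({
    "client_id": [1, 1, 2, 3, 3, 3, 4, 5],
    "paid_amount": [200.0, 150.0, 9000.0, 50.0, 75.0, 40.0, 30000.0, 500.0],
})

totals = claims.groupby("client_id")["paid_amount"].sum()
cutoff = totals.quantile(0.90)                      # 90th percentile of spending
high_cost = totals[totals >= cutoff]

share = high_cost.sum() / totals.sum()
print(f"high-cost clients: {list(high_cost.index)}; "
      f"they account for {share:.0%} of total AODTx spending")
```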

  5. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  6. The Science of Cost-Effective Materials Design - A Study in the Development of a High Strength, Impact Resistant Steel

    Science.gov (United States)

    Abrahams, Rachel

    2017-06-01

    Intermediate alloy steels are widely used in applications where both high strength and toughness are required for extreme/dynamic loading environments. Steels containing greater than 10% Ni-Co-Mo are amongst the highest strength martensitic steels, due to their high levels of solution strengthening, and preservation of toughness through nano-scaled secondary-hardening, semi-coherent hcp-M2C carbides. While these steels have high yield strengths (σy(0.2%) > 1200 MPa) with high impact toughness values (CVN at -40 °C > 30 J), they are often cost-prohibitive due to the material and processing cost of nickel and cobalt. Early stage-I steels such as ES-1 (Eglin Steel) were developed in response to the high cost of nickel-cobalt steels and performed well in extreme shock environments due to the presence of analogous nano-scaled hcp-Fe2.4C epsilon carbides. Unfortunately, the persistence of W-bearing carbides limited the use of ES-1 to relatively thin sections. In this study, we discuss the background and accelerated development cycle of AF96, an alternative Cr-Mo-Ni-Si stage-I temper steel using low-cost heuristic and Integrated Computational Materials Engineering (ICME)-assisted methods. The microstructure of AF96 was tailored to mimic that of ES-1, while reducing stability of detrimental phases and improving ease of processing in industrial environments. AF96 is amenable to casting and forging, deeply hardenable, and scalable to 100,000 kg melt quantities. When produced at the industrial scale, it was found that AF96 exhibits near-statistically identical mechanical properties to ES-1 at 50% of the cost.

  7. Critical operations capabilities in a high cost environment: a multiple case study

    Science.gov (United States)

    Sansone, C.; Hilletofth, P.; Eriksson, D.

    2018-04-01

    Operations capabilities have been a popular research area for many years and several frameworks have been proposed in the literature. The current frameworks do not take specific contexts into consideration, for instance a high cost environment. This research gap is of particular interest since a manufacturing relocation process has been ongoing for the last decades, leading to a large amount of manufacturing being moved from high-cost to low-cost environments. The purpose of this study is to identify critical operations capabilities in a high cost environment. The two research questions were: What are the critical operations capabilities dimensions in a high cost environment? What are the critical operations capabilities in a high cost environment? A multiple case study was conducted and three Swedish manufacturing firms were selected. The study was based on the investigation of an existing framework of operations capabilities. The main dimensions of operations capabilities included in the framework were: cost, quality, delivery, flexibility, service, innovation and environment. Each of the dimensions included two or more operations capabilities. The findings confirmed the validity of the framework and its usefulness in a high cost environment, and a new operations capability (employee flexibility) was revealed.

  8. Computer-assisted cognitive remediation therapy in schizophrenia: Durability of the effects and cost-utility analysis.

    Science.gov (United States)

    Garrido, Gemma; Penadés, Rafael; Barrios, Maite; Aragay, Núria; Ramos, Irene; Vallès, Vicenç; Faixa, Carlota; Vendrell, Josep M

    2017-08-01

    The durability of computer-assisted cognitive remediation (CACR) therapy over time and the cost-effectiveness of treatment remain unclear. The aim of the current study is to investigate the effectiveness of CACR and to examine the use and cost of acute psychiatric admissions before and after CACR. Sixty-seven participants were initially recruited. For the follow-up study a total of 33 participants were enrolled, 20 to the CACR condition group and 13 to the active control condition group. All participants were assessed at baseline, post-therapy and 12 months post-therapy on neuropsychology, QoL and self-esteem measurements. The use and cost of acute psychiatric admissions were collected retrospectively at four assessment points: baseline, 12 months post-therapy, 24 months post-therapy, and 36 months post-therapy. The results indicated that treatment effectiveness persisted in the CACR group one year post-therapy on neuropsychological and well-being outcomes. The CACR group showed a clear decrease in the use of acute psychiatric admissions at 12, 24 and 36 months post-therapy, which lowered the global costs of acute psychiatric admissions over the same period. The effects of CACR are durable over at least a 12-month period, and CACR may be helping to reduce health care costs for schizophrenia patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  9. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large scale scientific computing have always been ahead of what state of the art hardware could supply in the form of the supercomputers of the day. For any single processor system, the limit to increases in computing power was recognized a few years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  10. Empathy costs: Negative emotional bias in high empathisers.

    Science.gov (United States)

    Chikovani, George; Babuadze, Lasha; Iashvili, Nino; Gvalia, Tamar; Surguladze, Simon

    2015-09-30

    Excessive empathy has been associated with compassion fatigue in health professionals and caregivers. We investigated an effect of empathy on emotion processing in 137 healthy individuals of both sexes. We tested a hypothesis that high empathy may underlie increased sensitivity to negative emotion recognition which may interact with gender. Facial emotion stimuli comprised happy, angry, fearful, and sad faces presented at different intensities (mild and prototypical) and different durations (500ms and 2000ms). The parameters of emotion processing were represented by discrimination accuracy, response bias and reaction time. We found that higher empathy was associated with better recognition of all emotions. We also demonstrated that higher empathy was associated with response bias towards sad and fearful faces. The reaction time analysis revealed that higher empathy in females was associated with faster (compared with males) recognition of mildly sad faces of brief duration. We conclude that although empathic abilities were providing for advantages in recognition of all facial emotional expressions, the bias towards emotional negativity may potentially carry a risk for empathic distress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  12. Time-driven activity-based costing of low-dose-rate and high-dose-rate brachytherapy for low-risk prostate cancer.

    Science.gov (United States)

    Ilg, Annette M; Laviana, Aaron A; Kamrava, Mitchell; Veruttipong, Darlene; Steinberg, Michael; Park, Sang-June; Burke, Michael A; Niedzwiecki, Douglas; Kupelian, Patrick A; Saigal, Christopher

    Cost estimates through traditional hospital accounting systems are often arbitrary and ambiguous. We used time-driven activity-based costing (TDABC) to determine the true cost of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy for prostate cancer and demonstrate opportunities for cost containment at an academic referral center. We implemented TDABC for patients treated with I-125, preplanned LDR and computed tomography based HDR brachytherapy with two implants from initial consultation through 12-month followup. We constructed detailed process maps for provision of both HDR and LDR. Personnel, space, equipment, and material costs of each step were identified and used to derive capacity cost rates, defined as price per minute. Each capacity cost rate was then multiplied by the relevant process time and products were summed to determine total cost of care. The calculated cost to deliver HDR was greater than LDR by $2,668.86 ($9,538 vs. $6,869). The first and second HDR treatment day cost $3,999.67 and $3,955.67, whereas LDR was delivered on one treatment day and cost $3,887.55. The greatest overall cost driver for both LDR and HDR was personnel at 65.6% ($4,506.82) and 67.0% ($6,387.27) of the total cost. After personnel costs, disposable materials contributed the second most for LDR ($1,920.66, 28.0%) and for HDR ($2,295.94, 24.0%). With TDABC, the true costs to deliver LDR and HDR from the health system perspective were derived. Analysis by physicians and hospital administrators regarding the cost of care afforded redesign opportunities including delivering HDR as one implant. Our work underscores the need to assess clinical outcomes to understand the true difference in value between these modalities. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
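
    A minimal TDABC sketch, with invented resources, rates and process times; only the method (capacity cost rate multiplied by process minutes, summed over the process map) follows the record above:

```python
# Minimal time-driven activity-based costing (TDABC) sketch. The resources,
# rates, and process times below are hypothetical; the method is
# capacity cost rate x process minutes, summed over the process map.
resources = {
    # resource: (cost per month, available minutes per month)
    "physician":  (45_000.0, 7_200.0),
    "physicist":  (20_000.0, 8_000.0),
    "or_suite":   (60_000.0, 9_600.0),
}
capacity_cost_rate = {r: cost / minutes for r, (cost, minutes) in resources.items()}

# Process map for one (hypothetical) brachytherapy episode: (resource, minutes)
process_map = [
    ("physician", 45),   # consultation and planning
    ("physicist", 60),   # dosimetry
    ("or_suite",  90),   # implant procedure
    ("physician", 90),   # implant procedure
    ("physician", 30),   # follow-up visits
]

total = sum(capacity_cost_rate[r] * minutes for r, minutes in process_map)
for r in resources:
    print(f"{r:10s} rate = ${capacity_cost_rate[r]:.2f}/min")
print(f"episode cost = ${total:,.2f}")
```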

  13. A computational study of highly viscous impinging jets

    International Nuclear Information System (INIS)

    Silva, M.W.

    1998-11-01

    Two commercially-available computational fluid dynamics codes, FIDAP (Fluent, Inc., Lebanon, NH) and FLOW-3D (Flow Science, Inc., Los Alamos, NM), were used to simulate the landing region of jets of highly viscous fluids impinging on flat surfaces. The volume-of-fluid method was combined with finite difference and finite element approaches to predict the jet behavior. Several computational models with varying degrees of physical realism were developed, and the results were compared with experimental observations. In experiments, the jet exhibited several complex behaviors. As soon as it exited the nozzle, the jet began to neck down and become narrower. When it impacted the solid surface, the jet developed an instability near the impact point and buckled to the side. This buckling became a spiraling motion, and the jet spiraled about the impact point. As the jet spiraled around, a cone-shaped pile was built up which eventually became unstable and slumped to the side. While all of these behaviors were occurring, air bubbles, or voids, were being entrapped in the fluid pool. The results obtained from the FLOW-3D models more closely matched the behavior of real jets than the results obtained from the FIDAP models. Most of the FLOW-3D models predicted all of the significant jet behaviors observed in experiments: necking, buckling, spiraling, slumping, and void entrapment. All of the FIDAP models predicted that the jet would buckle relatively far from the point of impact, whereas the experimentally observed jet behavior indicates that the jets buckle much nearer the impact point. Furthermore, it was shown that FIDAP is incapable of incorporating heat transfer effects into the model, making it unsuitable for this work.

  14. A computational study of highly viscous impinging jets

    Energy Technology Data Exchange (ETDEWEB)

    Silva, M.W. [Univ. of Texas, Austin, TX (United States). Dept. of Mechanical Engineering

    1998-11-01

    Two commercially-available computational fluid dynamics codes, FIDAP (Fluent, Inc., Lebanon, NH) and FLOW-3D (Flow Science, Inc., Los Alamos, NM), were used to simulate the landing region of jets of highly viscous fluids impinging on flat surfaces. The volume-of-fluid method was combined with finite difference and finite element approaches to predict the jet behavior. Several computational models with varying degrees of physical realism were developed, and the results were compared with experimental observations. In experiments, the jet exhibited several complex behaviors. As soon as it exited the nozzle, the jet began to neck down and become narrower. When it impacted the solid surface, the jet developed an instability near the impact point and buckled to the side. This buckling became a spiraling motion, and the jet spiraled about the impact point. As the jet spiraled around, a cone-shaped pile was built up which eventually became unstable and slumped to the side. While all of these behaviors were occurring, air bubbles, or voids, were being entrapped in the fluid pool. The results obtained from the FLOW-3D models more closely matched the behavior of real jets than the results obtained from the FIDAP models. Most of the FLOW-3D models predicted all of the significant jet behaviors observed in experiments: necking, buckling, spiraling, slumping, and void entrapment. All of the FIDAP models predicted that the jet would buckle relatively far from the point of impact, whereas the experimentally observed jet behavior indicates that the jets buckle much nearer the impact point. Furthermore, it was shown that FIDAP is incapable of incorporating heat transfer effects into the model, making it unsuitable for this work.

  15. High threshold distributed quantum computing with three-qubit nodes

    International Nuclear Information System (INIS)

    Li Ying; Benjamin, Simon C

    2012-01-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance. (paper)

  16. Pulmonary leukemic involvement: high-resolution computed tomography evaluation

    International Nuclear Information System (INIS)

    Oliveira, Ana Paola de; Marchiori, Edson; Souza Junior, Arthur Soares

    2004-01-01

    Objective: To evaluate the role of high-resolution computed tomography (HRCT) in patients with leukemia and pulmonary symptoms, to establish the main patterns and to correlate them with the etiology. Materials and Methods: This is a retrospective study of the HRCT of 15 patients with leukemia and pulmonary symptoms. The examinations were performed using a spatial high-resolution protocol and were analyzed by two independent radiologists. Results: The main HRCT patterns found were ground-glass opacity (n=11), consolidation (n=9), airspace nodules (n=3), septal thickening (n=3), tree-in-bud pattern (n=3), and pleural effusion (n=3). Pulmonary infection was the most common finding seen in 12 patients: bacterial pneumonia (n=6), fungal infection (n = 4), pulmonary tuberculosis (n=1) and viral infection (n=1). Leukemic pleural infiltration (n=1), lymphoma (n=1) and pulmonary hemorrhage (n=1) were detected in the other three patients. Conclusion: HRCT is an important tool that may suggest the cause of lung involvement, its extension and in some cases to guide invasive procedures in patients with leukemia. (author)

  17. Computer-aided control of high-quality cast iron

    Directory of Open Access Journals (Sweden)

    S. Pietrowski

    2008-04-01

    Full Text Available The study discusses the possibility of controlling high-quality grey cast iron and ductile iron using the authors' original computer programs. The programs have been developed with the help of algorithms based on statistical relationships that are said to exist between the characteristic parameters of DTA curves and properties like Rp0,2, Rm, A5 and HB. It has been proved that the spheroidisation and inoculation treatment of cast iron significantly changes the characteristic parameters of DTA curves, thus enabling control of these operations as regards their correctness and effectiveness, along with the related changes in microstructure and mechanical properties of cast iron. Moreover, some examples of statistical relationships existing between the typical properties of ductile iron and its control process were given for melts consistent and inconsistent with the adopted technology. A test stand for control of high-quality cast iron and the respective melts is schematically depicted.

  18. Automated high speed volume computed tomography for inline quality control

    International Nuclear Information System (INIS)

    Hanke, R.; Kugel, A.; Troup, P.

    2004-01-01

    Increasing complexity of innovative products as well as growing requirements on quality and reliability call for more detailed knowledge about the internal structures of manufactured components, obtained by 100% inspection rather than just by sampling tests. First-step solutions, such as radioscopic inline inspection machines equipped with automated data evaluation software, have become state of the art on the production floor in recent years. However, these machines provide only ordinary two-dimensional information and deliver no volume data, e.g. to evaluate the exact position or shape of detected defects. One way to solve this problem is the application of X-ray computed tomography (CT). Compared to the performance of the first generation medical scanners (scanning times of many hours), today, modern Volume CT machines for industrial applications need about 5 minutes for a full object scan depending on the object size. Of course, this is still too long to introduce this powerful method into inline production quality control. In order to gain acceptance, the scanning time including subsequent data evaluation must be decreased significantly and adapted to the manufacturing cycle times. This presentation demonstrates the new technical setup, reconstruction results and the methods for high-speed volume data evaluation of a new fully automated high-speed CT scanner with cycle times below one minute for an object size of less than 15 cm. This will directly create new opportunities in the design and construction of more complex objects. (author)

  19. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    Full Text Available The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants is becoming increasingly obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or even tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch or the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  20. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
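
    LPS itself relies on kernel instrumentation; purely as a user-space toy illustrating what a provenance record can link together (process, inputs, outputs, user, host, timing), a wrapper might look like:

```python
# User-space toy, not the kernel-level instrumentation LPS uses: wrap a
# function so that each call emits a provenance record linking inputs,
# outputs, user, host, and timestamp.
import functools, getpass, json, socket, time

def provenance(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        record = {
            "process": fn.__name__,
            "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            "output": repr(result),
            "user": getpass.getuser(),
            "host": socket.gethostname(),
            "started": start,
            "elapsed_s": time.time() - start,
        }
        print(json.dumps(record))          # a real service would stream this out
        return result
    return wrapper

@provenance
def transform(values, scale):
    return [v * scale for v in values]

transform([1, 2, 3], scale=10)
```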